Amortizing the Costs of Software and Licensing Fees

One of the questions I am sometimes asked as a Tech Advisor is “How do we amortize software development costs?”  Or “How are we going to recoup the costs of software development?”  The answer is, if the business is doing well, software development costs are typically paid off quickly.  As a prime example, you only have to look at the stock valuations of the big software companies in relation to the size and cost of their workforces.

The big software companies are experts at developing software.  Companies that don’t specialize in software can do very well as long as they hire the right consultants and employees, or put together excellent teams of developers.

In simple terms, software developed “in-house” or by hired consultants is owned by the company.  This means that the total cost of development is a one-time expense.  Obviously, there are issues of hardware costs, hosting or cloud costs, operations, maintenance, upgrades, new features, and so on, which must be included in the budget.  Nevertheless, once developed, the software can simply be a part of operations.

It is often shocking to learn about the advanced age of major software systems in huge organizations.  The core code may be untouched for many years.  The hardware may be old as well, like old IBM systems that run reliably for years.  A major reason for the Y2K crisis was that the original developers couldn’t conceive that the software would last so long.  As the saying goes, “If it ain’t broke, don’t fix it.”  Another reason for software longevity is, ironically, the limited longevity of the original developers: the people who really understood the software’s functionality may be long gone, whether through retirement, job changes, or death.

Our contention, therefore, is that, as long as the business thrives, the software costs will be easily absorbed.  In fact, software can offer one of the most profound returns on investment that can be had.

There is a cautionary note we want to sound here.  Often software is sold based on a recurring licensing fee.  Depending on the size of the business, these fees may be relatively modest, easily absorbed in the costs of doing business – or they can become onerous, a drag on profitability, even viability.

Recently I learned about a mid-sized sales and distribution company whose operations were tied to a legacy ERP system.  For normal business to take place, they needed about 150 seat licenses, at a cost of $2,000 per seat annually.  Since they first adopted the system, technology has evolved.  Today there would be no need for 150 “seats.”  The operations performed could be consolidated into a more modern architecture, such as a service bus, which could potentially handle many workers, perhaps all 150, through a single connection point.  However, the ERP company would not willingly give up that $300,000 annually to provide such an interface.

In my experience, sometimes companies opt to purchase a platform with annual licensing fees, in order not to pay for custom development up front.  There are some advantages to this approach:

  • The platform will have “built-in” many of the capabilities that are planned for the system.
  • The initial cost will be lower with the platform.
  • Customization of the platform will be relatively quick, so the system will come online sooner.

The downside is of course that the licensing fees are ongoing.  There are other issues as well, however.  I recently spoke with the owner of a small business, who, years before, had opted for a platform solution, against my advice at the time.  Now they were up against a wall.  The platform was software-as-a-service.  Their version was going to be discontinued shortly, forcing them to upgrade.  Unfortunately, all their customizations, built on the old platform, would be gone with the upgrade.  They were in a quandary.  My only advice was that they should try to engage the original developer to replicate the functionality of the earlier customizations on the latest platform version.  They were very concerned about the time, disruption, and costs involved.

Another issue for this company was their plans for the system.  Although this was a small business, the owners had excellent industry contacts.  Their initial vision was to build a system that they would own, and which could be marketed to other companies, putting them into the software business.  Since nothing in their vision had been done before, and it offered vast efficiency improvements over the way things were done in the industry, this plan did not seem totally farfetched.   What’s more, the cost of building custom could have been recouped in a few months, based on a modest assessment of the productivity increase.  However, once they decided to go with a platform with its own licensing fees, versions, and upgrades, they didn’t end up with a system that had any value outside their own office.

There is no easy answer when evaluating a system that involves licensing fees.  Consider, for example, that applications running in the cloud on AWS or Azure do involve fees.  However, the savings in infrastructure and the staff to maintain software applications often mean that these fees are a bargain.  Also, such fees are usage-scaled, so that the client pays for actual processing that goes on, rather than paying per seat – which, in these times, seems fairer.  Also, you can deploy in the cloud, but still own your application.  Should a company wish, it could move its cloud application to its own server farm.

Licensing fees are sometimes enforced with measures that may be viewed as draconian.  Oracle has such a reputation, whether deserved or not.  At the very least, following the licensing requirements exactly may be quite burdensome for companies.

Companies may need to buy software systems or platforms with rather heavy licensing fees.  In some cases there may be better alternatives.  Licensing fees are not always “bad,” but there may be some drawbacks that aren’t obvious to prospective licensees.

Copyright © 2018 Patrick D. Russell


CUDAfication: NVIDIA GPU Parallel Processing for Data

NVIDIA’s GPU (graphics processing unit) technology originated to support the intensive video processing required by increasingly “realistic” computer gaming.  We can explain this technology in simplified terms.  Achieving an immersive quality in a video game requires very rapid transformations of all the millions of pixels on the monitor screen.  While the values are different for each pixel, and the transformations applied to a given pixel may vary, the calculation logic is the same for each pixel.  More importantly, the result of a calculation for one pixel is independent of the calculation for any of the other pixels.  Thus, these separate calculations can be done in parallel by thousands of processors, updating all of the pixels with extreme speed for each video frame.  This description is admittedly an oversimplification, but it gives a general idea of the power of parallel processing.
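
As a toy illustration of that per-pixel independence, here is a minimal CUDA sketch that brightens every pixel of a tiny grayscale “frame.”  The names, dimensions, and gain factor are assumptions chosen only for the example; each GPU thread handles exactly one pixel, independently of all the others.

// Illustrative CUDA sketch (assumed names and sizes): one GPU thread per
// pixel, each applying the same brightness transform independently of
// every other pixel -- the per-pixel parallelism described above.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void brighten(unsigned char *pixels, int width, int height, float gain)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        int idx = y * width + x;
        float v = pixels[idx] * gain;
        pixels[idx] = (unsigned char)(v > 255.0f ? 255.0f : v);  // clamp to 255
    }
}

int main()
{
    const int width = 4, height = 2;                 // a toy "frame"
    unsigned char frame[width * height] = { 10, 20, 30, 40, 50, 60, 70, 250 };

    unsigned char *dFrame;
    cudaMalloc(&dFrame, sizeof(frame));
    cudaMemcpy(dFrame, frame, sizeof(frame), cudaMemcpyHostToDevice);

    dim3 block(16, 16);                              // threads per block
    dim3 grid((width + 15) / 16, (height + 15) / 16);
    brighten<<<grid, block>>>(dFrame, width, height, 1.2f);

    cudaMemcpy(frame, dFrame, sizeof(frame), cudaMemcpyDeviceToHost);
    for (int i = 0; i < width * height; ++i) printf("%d ", frame[i]);
    printf("\n");

    cudaFree(dFrame);
    return 0;
}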

Compare this with the traditional CPU (central processing unit), as in the average desktop computer.  With a CPU, each of the pixel calculations would have to be done sequentially, rather than in parallel.  A CPU simply could not handle the processing demands of many modern video games.  Nowadays the latest CPUs offer some parallelism with multiple cores (perhaps 24 in high-end CPUs).  In contrast, the latest NVIDIA Titan V processor has 5,120 cores.  Most software programs for CPUs are written using sequential logic – whether or not they could be parallelized in whole or in part.

NVIDIA has for years been at the forefront of GPU development.  As scientists and developers began to anticipate the promise of parallel computing in many other areas besides video games, NVIDIA developed the CUDA platform, first released in 2006, enabling developers to write software in C or C++ for an NVIDIA GPU.

The ability to use parallel processing is not a given.  There are processing problems that are called “embarrassingly parallel” – meaning that each data element is treated the same way, and the values in each data element are independent of the others (as we described above for video processing).  For example, if you have an array of products, where each element holds a product SKU and a price, and you want to apply a 5% discount across the board, this is an embarrassingly parallel problem: discounting the price of each element is independent of all the others.  Now suppose each element holds a SKU, a product Category, and a price, and you want to discount the price of all Raincoats by 10% (assume RAINCOAT is a Category or has a Category Code in the array).  This is still an embarrassingly parallel problem, even though only Raincoats will be discounted.  The logic is: if the element’s Category is Raincoat, apply the discount to its price; otherwise do nothing.  Note that the order of elements in the array is irrelevant.  No sorting is required, and the logic can be applied “simultaneously” to all elements in the array.
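
To make the raincoat example concrete, here is a minimal CUDA sketch of that discount.  The array layout, names, and category code are illustrative assumptions rather than anything from a real system; the point is simply that each thread processes one element with no dependence on any other.

// Minimal CUDA sketch (illustrative names and values): apply a 10%
// discount to every element whose category code marks it as a raincoat.
// Each thread handles one array element independently -- an
// "embarrassingly parallel" operation.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void discountRaincoats(const int *codes, float *prices,
                                  int n, int raincoatCode, float discount)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && codes[i] == raincoatCode)
        prices[i] *= (1.0f - discount);   // e.g. 0.10f for 10%
}

int main()
{
    const int n = 4, RAINCOAT = 7;
    int   codes[n]  = { 7, 3, 7, 1 };                  // category codes
    float prices[n] = { 80.f, 25.f, 120.f, 15.f };

    int *dCodes; float *dPrices;
    cudaMalloc(&dCodes,  n * sizeof(int));
    cudaMalloc(&dPrices, n * sizeof(float));
    cudaMemcpy(dCodes,  codes,  n * sizeof(int),   cudaMemcpyHostToDevice);
    cudaMemcpy(dPrices, prices, n * sizeof(float), cudaMemcpyHostToDevice);

    // One thread per element, 256 threads per block.
    discountRaincoats<<<(n + 255) / 256, 256>>>(dCodes, dPrices, n, RAINCOAT, 0.10f);

    cudaMemcpy(prices, dPrices, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("%.2f\n", prices[i]);

    cudaFree(dCodes); cudaFree(dPrices);
    return 0;
}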

However, many processing problems are not embarrassingly parallel.  For example, if you want to calculate the average price for all SKUs or all Raincoats, you normally go through each element one-by-one and keep a running total, and then divide that by the number of elements found.  The result at each element depends on a cumulative result from all the prior elements.  This cannot be done in parallel but requires sequential logic.

One of the major challenges in CUDA programming is how to handle problems that are not embarrassingly parallel.  This requires some clever tricks and techniques.  A problem may be broken down into pieces, where parallelism can be applied, and the pieces are then reassembled in various ways.  Interestingly, some of the great CUDA programmers, typically those who write CUDA libraries, have been able to provide functionality that is easily 20 to 100 times faster than the benchmark CPU programs, even though the logic would seem to be fundamentally sequential.
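
As one taste of how a seemingly sequential problem can be recast, here is a hedged sketch of the average-price calculation done as a parallel reduction.  It uses Thrust, a template library that ships with the CUDA toolkit; the prices and names are made up for illustration, and library routines like this (or their CUB equivalents) handle the partial sums and their recombination behind the scenes.

// Illustrative sketch: computing an average price with a parallel
// reduction instead of a sequential running total.  Thrust performs
// the sum in stages (partial sums combined in parallel), so most of
// the work still runs across many GPU threads at once.
#include <cstdio>
#include <thrust/device_vector.h>
#include <thrust/reduce.h>

int main()
{
    float prices[] = { 80.f, 25.f, 120.f, 15.f, 60.f };
    int   n = sizeof(prices) / sizeof(prices[0]);

    // Copy the data to the GPU.
    thrust::device_vector<float> dPrices(prices, prices + n);

    // Parallel reduction: the GPU combines partial sums rather than
    // walking the array element by element.
    float total = thrust::reduce(dPrices.begin(), dPrices.end(), 0.0f);

    printf("average price = %.2f\n", total / n);
    return 0;
}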

NVIDIA GPUs have become the premier platform for:

  • Supercomputing.  A present-day supercomputer may comprise thousands of GPUs.
  • Advanced scientific research and programming.  Specialty libraries facilitate research with previously unheard-of speeds.
  • Artificial Intelligence (AI).  GPUs have facilitated Neural Networks and Deep Learning with unprecedented performance.  The uses for AI are growing daily.
  • Blockchain.  GPUs can greatly accelerate the intensive calculations required for blockchain mining – recently leading to a doubling or tripling of prices on many GPU boards.

This summary doesn’t begin to do justice to the uses and applications of NVIDIA GPU technology.  For example, NVIDIA AI is not only at the heart of the self-driving car technologies, it is also being used in many new scientific applications; for example, AI has recently been used to greatly accelerate the search for new planets.

I want to highlight one area that gets very little attention – where GPU parallel processing can deliver huge performance boosts (typically 100 times speed improvement or better) – and that is data processing and manipulation.  I have had the opportunity to work on several such projects.  Data processing in CUDA isn’t a simple given.  It requires specialized techniques custom-designed to fit specific challenges.  Starting with the CUDA libraries ModernGPU and CUB, which provide essential core functionality, the developer can design and build custom algorithms for data.  I have also developed some useful extensions to these libraries, such as multi-criteria sorting, which are especially valuable when working with data.  Data consolidation and compression, parsing text such as UTF-8 CSV into arrays, running queries on unsorted data, and running multiple queries simultaneously are among the types of processing that can be done – and all of these processes can run virtually instantaneously!  Each scenario is different and may require a creative design to solve.  However, if speed and performance are crucial, CUDA may be the best solution, and perhaps the only viable one.
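
To give a flavor of what “running queries on unsorted data” can look like at the code level, here is a small illustrative sketch using Thrust primitives: it counts the items priced above a threshold without any sorting or indexing.  The field names and threshold are assumptions for the example; real projects of the kind described above would typically layer custom CUB or ModernGPU kernels on top of this basic idea.

// Illustrative sketch: a simple "query" on unsorted data -- count the
// items priced above a threshold -- executed in parallel with Thrust.
// No sorting or indexing is required; every element is tested at once.
#include <cstdio>
#include <thrust/device_vector.h>
#include <thrust/count.h>

struct AboveThreshold
{
    float threshold;
    __host__ __device__
    bool operator()(float price) const { return price > threshold; }
};

int main()
{
    float prices[] = { 80.f, 25.f, 120.f, 15.f, 60.f, 200.f };
    int   n = sizeof(prices) / sizeof(prices[0]);

    thrust::device_vector<float> dPrices(prices, prices + n);

    // The predicate is applied to all elements in parallel.
    AboveThreshold pred = { 100.0f };
    int hits = thrust::count_if(dPrices.begin(), dPrices.end(), pred);

    printf("items above $100: %d\n", hits);
    return 0;
}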

Copyright © 2018 Patrick D. Russell


The Sources of Business Intelligence

Business Intelligence (abbreviated BI) is one of the hottest areas in Information Technology today.  In fact, a number of specialists in other areas, such as predictive analytics, are rebranding themselves.  It isn’t that they are doing anything fundamentally new.  Rather it’s that their potential clients, who tended to glaze over at the old labels, are now interested in and receptive to “business intelligence.”

This is a short blog entry, and, as such, in no way purports to cover the topic of BI.  Moreover, BI can be approached from a number of different directions – some radically different from others.  All we want to do here is to provide an entrée into some of the areas covered under the rubric of “business intelligence.”

By its name BI means intelligence in and for business.  Thus it is not just the collection of data or information, but rather implies “intelligent” processing, understanding, and usage of that information.  The intelligence part will lead to useful interpretations, judgments, correlations, and conclusions – and provide evidence and support for better decisions, namely those which lead to increased revenue and profits and which reduce and prevent errors and missteps.

First, let’s break BI down into two parts:

  1. The sources of the information.
  2. The analysis or processing of that information.

This is a somewhat artificial distinction, as collection and processing may be intertwined at least in part.  However, it will give us a place to start investigating.  As the title of this blog indicates, we will focus on some of the sources of BI.  The different sources will give us a broad categorization for BI applications.

Surveys

Surveys are a widely used and potentially very effective method for collecting business data.  However, there are two important caveats with surveys:

  • In order to be maximally useful and effective they must be properly designed.  It takes genuine expertise to put together such a survey.
  • The proper interpretation of survey results also requires expert analysis with sophisticated math and statistics.  The “obvious” conclusions of the survey may be wrong, and the deeper, more correct (read: profitable) conclusions may even be counterintuitive.

There is also a great deal of variation in surveys.  For example, they may be anonymous, or else the respondent identities may be known and therefore connectable with business or personal profiles.  The appropriate treatment and weighting in each case, along with how best to utilize profile information when available, are also important topics calling for expertise.  In addition, time factors and timeliness are key considerations.

BI experts specializing in surveys are sometimes able to enter the scene after a survey has been completed, even if it hasn’t been done under optimal conditions, and draw very useful conclusions.

When good surveys are combined with the proper analysis, the results can be staggering (translate: a huge increase in profitability).

One of the advantages of surveys is that they are a relatively lightweight approach.  They don’t require a major technology deployment, and a well selected sample can represent much larger numbers.

Social Networking Analysis

In my earlier blog, The Overlooked Power of Social Networking, I noted:  “The social media are great repositories of sentiment, and this sentiment can be mined and analyzed.  What’s more, because the social media are highly dynamic, changes in sentiment can be monitored with virtually no lag time.”  In addition, I indicated that social networking analysis can supplement the survey as a vehicle for garnering business intelligence.

Surveys and social networking analysis have a great deal in common.  They both offer the means to reach out beyond the organization to the customer and prospect, or to any other strategic group or segment outside the walls.  They both can be targeted to specific demographics as desired.  Of course technically they involve different ways and means, and require different types of expertise to facilitate.  However, once the survey or social networking data are collected, they can be run through various types of analytics to yield valuable BI.

Survey and social networking data are both, in the end, a matter of statistics, and each has its limitations.  Surveys often involve subjective evaluative responses.  They will often impose range breakdowns (such as annual household income brackets) that may or may not be optimal.  Poorly designed questions may be leading, designed to garner positive answers, while skirting very specific nagging problems or failing to allow the expression of legitimate complaints.  And many responses are not totally honest.

The social media also have their limitations.  The ratio of valid comments to “junk” may be low.  They also may represent a skewed demographic.  The structure of the information is minimal or non-existent.  The sheer quantity of data, especially from Twitter, can be daunting.  It is no simple feat to sift through all this to glean useful conclusions.  Nevertheless, conclusions will start to emerge almost immediately even when very simple techniques are used.  The more refinement applied, the more focused and interesting the results become.  Statistical studies, artificial intelligence, and natural language processing are among the advanced techniques that can be used.  The social media also have a “primitive” or “uncensored” quality reflecting a raw edge of sentiment difficult to extract from anywhere else.

A big advantage of social networking over surveys is that the data are ongoing and real-time.  For example, you can measure the increase in volume and change of sentiment about a product immediately following a Super Bowl commercial.

Business Process

In my two books, The Tech Advisor and Workflow 101, I discuss workflow – or the software automation of organizational process – at some length.  However, workflow automation is built around the collection, retention, management, and utilization of data.  It is “data driven.”  A brand-new workflow system will use existing data, and new data will be created for it.  Furthermore, workflow generates its own data as it runs, such as process status, employee performance, and throughput statistics.

For example, workflow automation for customer service will be a repository for all kinds of data, about various products and the issues that plague them, and even information that could lead to improvements in service rep training and support tools.  Such data have value throughout the organization – for manufacturing, quality control, human resources, management, training, project planning, and so on.

In large enterprises there are often many workflow automations, as well as databases, applications, and systems of various kinds, typically of heterogeneous origin.  There is an organizational impetus to connect and integrate these systems, either in a piecemeal fashion or in a cohesive, overarching way.  There are usually a number of motivations for doing so, including:

  1. Streamlining operations.
  2. Eliminating redundancies in activities and processes.
  3. Connecting valuable data and resources to where they are needed.
  4. Centralizing systems around communications buses or service brokers.
  5. Bringing direct process control and governance out to higher level management.
  6. Bringing data and information about process, both real-time and historical, out to higher level management.

Motivation 6 is the one particularly associated with BI.  A number of software companies that develop workflow, database, and other enterprise software platforms also sell systems specifically designed to cull information about process and bring it out for centralized dashboards and reports.  These systems may also facilitate the other five motivations.  They may, for example, allow business rules to be defined, thereby adding new control features as well as streamlining process.

From the BI standpoint, complex, combined workflow automations can yield very rich and layered data, making it possible to skim the cream off the top of organizational process.

General Operations

We can enlarge upon the idea of business process to include all organizational operations that generate or operate with computer data.  Large organizations will often have big systems handling major portions of their operations, such as ERP (enterprise resource planning), CRM (customer relationship management), supply chain management, sales systems, marketing systems, and so on.  While these do fall under the rubric of “business process” as we discussed in the last section, we consider them separately here, as they are often large, self-contained systems.  As such, they will often have their own centralized databases which can be accessed as sources of BI.

Thus we have a slightly different emphasis with these operational systems than with business processes.  With business processes, we are looking at ways to pick up process data at key junctures.  With these operational systems, we are simply connecting into the system’s data repository.

Key Performance Indicators

Many BI specialists like to establish key performance indicators (KPIs) as a way of monitoring organizational performance in various areas.  KPIs are often chosen in conjunction with a management framework such as the Balanced Scorecard developed by Kaplan and Norton, which has been a major tool for defining and driving organizational improvement in the last two decades.  KPIs are a very carefully chosen set of metrics that provide a snapshot of the state of the whole organization.  More than that, they are pressure points, chosen as strategic targets for the application of process improvements.  Defining KPIs requires art, experience, and expertise – to achieve both an accurate representation and a tableau for quick and meaningful results.

KPIs are typically connected with the workflows or automated processes, which we discussed in the last section.  We sometimes speak of “instrumenting” business processes, by introducing points of measurement that are “hooked up” in some manner to a central performance monitoring station.  Think of this like hooking up gauges wired to a central console.

To the extent that KPIs are set up through software – and are not simply instituted as “manual” procedures – they may piggyback on existing process automation.  Alternatively, they may be created ad hoc, perhaps as very lightweight workflows that do little more than monitor and bring out data at key points.

Note:  KPIs for BI are very similar to the metrics used to verify compliance – Sarbanes-Oxley, for example.  Here the experts come in and look at the entirety of organizational processes and risks, and select a number of critical points that address the principal risks and collectively demonstrate compliance.  These points (often referred to as “controls”) are then monitored on an ongoing or regular basis.  Ideally the monitoring will be realized by software automation, and will include workflow features such as reminders, alerts, and escalations – to prevent any failures.  Even though the immediate goal is compliance, these interventions can be included under the rubric of BI.  The procedures will not only ensure compliance (a critical business goal), but may very well lead to other improvements.

Data Mining and Data Warehousing

Data mining is a catchall term for any form of digging through a lot of data to get the gold.  In general, we can consider the “gold” to be BI.  The big database where the gold is to be found might be, for example:

  • An internal company database.  For example, an auto company might have a database of all new cars purchased over the last twenty years with complete details on the models, the financial history of the transactions, profiles of the purchasers, warranties, dealer maintenance history, etc.
  • An external database that is purchased, licensed, or made available by a business affiliate.  For example, the same auto company may purchase a database from insurance companies, big auction houses, CarFax, and other sources, to track their cars after they go outside their dealer network.
  • Public databases, or data collected by universities and research organizations.  Some of these data may be available for free.  Often they may be accessible only through specialized interfaces, perhaps over the web.

Often data mining is outsourced to companies that do that as a specialty.  These companies provide interfaces to their customers allowing them to run queries and get results, but the maintenance and processing of the data are kept at the data mining company.  These companies also take on data mining projects under contract.

Typically we think of data mining as going along with standard business activities such as market research, product development, and customer relationship improvement.  However, there are also organizations for which data mining is essentially a core process.  Consider online search tools like Google or Yahoo.  Even today’s large-scale online stores are fueled by continuous data mining.  Consider Amazon.com, which finds matching products, suggestions that match user profiles, and so on.

Data warehousing refers to collecting data from various sources (from inside the organization and sometimes from external sources as well), and putting them into a large “warehouse” from where they can be mined.  Data warehousing is particularly associated with BI, as one of the primary motivations for collecting data in a warehouse is to perform analysis, do forecasting, and provide decision support – based on the ability to look at the broad spectrum of information.

Data warehousing is a major area of IT, where there are numerous theories, methodologies, products, and services to be found – all well beyond the scope of this discussion.

Big Data

“Big data” is a recent term referring to a collection of data so large that it cannot effectively be handled by conventional database management systems.  Data capture, processing, storage, search, sharing, analysis, and visualization become enormous challenges as the amount of data grows.  Data are growing at a staggering rate.  In my prior blog entry, Green IT: Why it is so important, I discussed some of the issues involved in the big data phenomenon.

In the mid-1990s I came up with an ad hoc principle: the degree of difficulty in managing data grows as the square of the amount of data.  This was not meant to be a quantitative fact, but rather was based on observation of numerous situations.  To me, it was actually counterintuitive; I would have thought that dealing with a larger amount of data would require only incrementally more work, but my experience proved to be very different.  Big data challenges are like this as well.  As data grow, the traditional hardware and software platforms cannot be scaled up to handle them.  New platforms based on large-scale parallelism, such as Apache Hadoop, are being deployed and further developed.  If you think about the processing involved in the Google search engine, sifting through gigantic quantities of data and returning query results almost instantly, you get a feel for big data.

Also there is something like the compound interest effect going on with big data.  As we observed with business process, the automation of process produces more data, and the extraction of intelligence from automated process produces still more data.  In fact data mining from any source produces more data.

Another factor that leads to data growth involves the inclusion of more specific information, such as timestamps, geographic locations, and source/trace paths.  (This information is found in many social media, for example.)  The drive in BI analytics to identify meaningful correlations makes it worthwhile to capture as much of this sort of contextual information as possible.  As we have seen, not only does this lead to larger datasets, but the analytics themselves generate even more data.

Despite the challenges, big data is an important source of BI.  Large organizations are now working with it and are creating huge repositories of useful information.  Anyone who does a Google search, shops at a major online store, or uses a large social networking site, is the beneficiary of ever-evolving big data technology.  Moreover, scientific research projects, such as weather modeling, astronomical surveys, and genome decoding, are making great strides thanks to new big data technologies.  A great deal of research on big data is going on today, looking for new and better ways to understand it, handle it, and take advantage of it.

Rich Data

The concept of rich data goes back twenty years or so.  Back then it was used to describe the document-centric data model of Lotus Notes® software.  In the typical relational database model the individual datum is generally small – one cell in a row-column (record-field) structure.  In contrast, rich data is more of a kitchen sink model.  The “datum” is quite open-ended.  It can include conventional data records, rich text documents, files, images, audio, video, and just about any other kind of digitized information.

For the sake of simplicity, we can consider rich data as covering the entire range of digital formats.  A critical issue nowadays for both small and large companies is the ability to locate files and documents.  A large company recently started a major initiative with the goal of saving each employee an average of five to ten minutes a day in searching for internal information.  Of course Windows Explorer will search into text files and Office documents.  SharePoint and other tools add PDF files to the searchable list.  However, what happens when the organization has files on hundreds, perhaps thousands of servers?  What if the access permissions block out those who need the information in many cases?  There are many issues here.

Of course not all files are Office documents or PDFs that can be searched as text.  Many software applications generate unique file formats.  Even if these are essentially textual, the text may not be retrievable by standard search tools outside the native applications.  Even then, the native applications may not be very good at locating native files, much less at searching through them.

Furthermore, searchability means more than just finding text.  What if you have a whole bunch of image files (like JPGs or PNGs) and you want to find pictures with cars in them, using advanced pattern recognition?  What if YouTube wants to scan video files to find which ones are violating copyrights?  What if you want to find audio files matching a particular voiceprint, or containing the word “Hadoop”?  What if you want to scan streaming content, rather than static files?  For example, you want to scan all major cable network feeds to identify musical performances or to profile commercial content.

Rich data, like the other sources, are extremely important reservoirs of potential BI.  The tools are becoming increasingly sophisticated, but demands are growing, and the challenges are daunting.

The Analysis and Processing of BI

This blog entry is limited to a brief discussion of some of the important sources of BI.  The analysis and processing that turns the data into intelligent information, decision support, actionable results, or valid system inputs might involve many diverse approaches.  These might include relatively simple analysis, more complex logic and algorithms, advanced calculations and statistics, and even artificial intelligence.  The end result might be a set of reports, a dashboard, a huge console of many screens in a war room, or a major strategic initiative for organizational improvement – or the results may simply feed automatically into other systems that consume them.

To work effectively with BI requires a range of management consulting, IT, and math and statistics skills.  To use BI to effect significant organizational change requires a rich and powerful methodology.

The Road to Business Intelligence

BI is a vast subject area, with topics that are far beyond our scope here.  Our goal has been to shed some light on BI by distinguishing some of its sources.  While most readers will be familiar with surveys and data mining, they may not be familiar with some of the other sources, at least not in terms of how they connect with BI.

There are a number of common features among the different BI sources, and there is of course some overlap.  For example, social networking and rich data sources may turn into big data.  Nevertheless, these different sources of BI represent some of the roads into BI which may be particularly useful for specific needs and applications.  If you are an organizational leader, perhaps this discussion will stimulate your thinking about ways in which you can better exploit BI for profit, sustainability, quality, and consistency.

 Copyright © 2012 Patrick D. Russell


Canned, Customized, or Custom: Which Software Choice is Right for You?

Businesses looking for specialized software naturally want to save money in their purchases.  Do you go for a “canned” system, or hire people to build something custom for you?  The answer is that it depends on the nature of what you are trying to accomplish, and what products and systems are available to meet your needs.

Note that we are talking here only about business-level software, software that is deployed over the entire business, or a section of the business – interconnecting workers, establishing collective information hubs, and automating workflows.  Such software is multiuser, and will typically provide different functionality and access levels to users depending on their positions and roles.  We are not talking about specialized workstation software or the usual office suites, browsers, and the like.  Those are by and large individual productivity tools, and the challenges we are discussing here generally do not apply to them.

“Canned” business-level software systems represent a wide variety of offerings and technologies, for example:

  • “Big ticket” or “monolithic” software packages designed for the enterprise that cover major, but fairly standard areas of business, like ERP, human resources systems, supply chain management, accounting, customer relationship management, and others.
  • More specialized big packages/products that are often targeted to specific industries, such as medical and pharmaceutical systems, tax systems for accountants, etc.  These are designed to automate major chunks of activity within those businesses.
  • Web-based template systems, such as content management and e-commerce systems, which allow users to build out their own sites with relative ease.
  • Software-as-a-service or cloud systems.  These systems in general promote flexibility through configuration – little or no software coding required.

What all these systems have in common is that they provide ways to design and deploy a business software system without having to write custom code.  Instead, once the products are installed (or the services subscribed), they are set up for a particular business by configuration: making selections from menus, entering information in fields, adding users, and so on.  The skills of a custom programmer are not required, and therefore they can be set up less expensively.  In many cases a reasonably technically savvy purchaser can do it him/herself.  Granted, in the big ticket systems, with deployments that affect hundreds or even thousands of workers, there is so much involved that an outside team is usually necessary, and often some custom coding will be required.  Nevertheless, there is still the understanding that the system already includes the major functional blocks, and that nothing needs to be built from scratch.

Our initial question can best be answered by asking another question:  Given what you want to do, both immediately and for some time into the future, will a package or product enable you to do it in a straightforward way?  We are not looking for 100%.  Often the old 80-20 or 90-10 rules will give us the right answer.  If it is going to cost way more to approach 100%, it may not be worth it, especially if the 80% or 90% will get you moving forward in a positive way.  You can make real money on the 80%, which could fund something more complex in the future.

There is a middle ground between a canned package and a completely custom solution, and that is customization.  Many packages provide some access points where software coders can come in and build capabilities that cannot be achieved by configuration alone.  Customization does require software development skills.  Among the systems and packages that offer optional customization, there is a very wide range in terms of how much it is possible to do, how easy it is to do, and so on.  You do not have carte blanche to make modifications, as the package or system itself is determining the major functionality.

The Middle Ground: Customization

Sometimes vendors want to have it both ways.  Salesforce.com, a Software-as-a-Service set of offerings, has made it a central theme in their advertising that they have put an end to the need for software coding.  They have a logo showing the word “SOFTWARE” inside a red circle with a diagonal bar (like a no left turn sign).  In other words, when you license their products you can do everything you need by configuration alone.  Nevertheless, the company promotes its extensive toolset for software developers to customize its cloud offerings, including a comprehensive, object-oriented programming language.  I guess software isn’t completely obsolete after all!

The ability to customize a standard product may appear to offer the best of all worlds: all the features and functionality of a comprehensive package, coupled with the ability to make it do whatever you want.  However, as we noted above, the ability to customize varies considerably.  You cannot assume that just because there is a developer’s toolkit or an API (application programming interface), you will be able to make the product do exactly what you want.  These are case-by-case choices:  What are you getting from the core product without customization?  What do you want to add or modify beyond its core capabilities?  What do the developer tools allow you to do?  How difficult will it be to accomplish what you want in that environment?

When you are planning for a system and have not yet decided on a product or package or a custom solution, it is important to consider carefully the complexity and uniqueness of the process you want to automate.  If your process is quite unique or specialized, or has many steps or “moving parts,” you may want to investigate a custom solution, rather than a product, package, or platform.

Here are some considerations to keep in mind when you are choosing between a customizable package or platform and a custom-built system:

1.      Big Ticket = Industrial Strength.

The “big ticket” systems, like SAP, IBM WebSphere, PeopleSoft, Oracle, and so on, are very complex and expensive, and an organization must make major commitments of time and resources to deploy them, as well as making major shifts in process.  However, they are attractive to large organizations because they are “industrial strength.”  This means several things.

  • They have an established track record of reliability.
  • Most have multiple modules covering various major business areas.  Over the years they have grown rich feature sets.  This offers a kind of one-stop shopping for major corporations.
  • They have large consulting and support organizations behind them, and there are multiple vendors and partners to choose from.
  • There is a strong level of confidence that the companies and products will be around for many years.
  • They typically run on a range of hardware and software operating systems, permitting an IT Department to stick with its preferred platforms.
  • Recently many of these products have added cloud versions, giving customers another option.
  • They scale up and out as needed.
  • They often include, or have available as options, software that connects with a number of other systems and databases.
  • They all provide relatively robust and rich APIs and customization tools.

Big ticket systems have earned a well-respected place in the software pantheon, and are often an excellent fit for large organizations.  They are the standard bearers for what we refer to as “enterprise software.”  Nevertheless, they don’t define the role model for business software.  There are several other important considerations, as we shall see…

2.      Is the value of the core technology overstated?

Salespeople will often recommend their products based on the importance of the core features and technology.  They may then point out that the additional features the customer wants can be added using customization tools.  They will sell “how much you get” in the core product, and minimize the degree of difficulty in adding the custom features.

This is in fact a rather common occurrence, and it can really skew the assessment of products.  About a year ago, we were acting as advisors for a company that wanted to make an arrangement with another company to license their technology.  The prospective buyer had been convinced that the technology in question would get them perhaps 80% of the way toward a specialized software technology they were developing.  We were very skeptical, and the seller was reluctant to have us involved in the deal at all.  Finally, we got a tour of the product’s features.  Our conclusion, as we had guessed, was that this technology might only get our clients 20% of the way to their goal, and the costs of building out from and interfacing to that platform would eat up any value in the 20%.  The clients were enticed by the prospect of partnering with a known entity, which was a value point for them.  We just wanted to make sure they knew what they were getting in making their decision.

In another instance I became familiar with, a company wanted to build out a very complicated workflow critical to their customer retention.  A salesman convinced them that they could build their system on a well-known customer relationship management (CRM) platform, and would get all the benefits of CRM, like managing their prospects, plus their workflow.  They did get their system built competently, but it was based on a tangle of customizations to the platform.

Fortunately for this company, things worked out reasonably well.  However, when I looked at how the system had been sold, I realized that the glamor of the core technology made them think they were getting a lot more than they did.  The CRM system was not a bad technology, but it got them nowhere close to where they needed to go.  Had the system been designed and developed “from scratch,” the extra effort to deliver what they were promised in the CRM would have been less than 5% of the project.  In my estimation, a custom system would have been a more cost-effective way to go in the long run, and would have been a cleaner design.

3.       Pay now, or pay later.

Custom software is typically more expensive initially than a package, often considerably more.  Therefore if you can get 80-90% of what you want in a package, you should probably go that route.  There are plenty of systems and packages out there meeting a huge range of needs.  If you are not looking for customization at all, you are probably fine.

However, if you will be customizing a product or package to meet your goals, you may want to make a long-range calculation of your investment.  Maybe you will save 50% up front by customizing a product compared with a custom solution.  However, you will be paying for the product and the customization by developers.  Also, products may have annual licensing fees, which tend to increase.  You may also be limiting the value of your intellectual property.  Your software might be turned into a product to sell to other companies.  This can still be done based on customizations, but then you are selling an add-on, rather than a full product.

4.      Version sensitivity.  Upgrades are a problem.

Continuing our discussion of pay now/pay later, we can also point out version issues, to which customizations of products and platforms are particularly sensitive.

In software, operating systems, standalone products, and custom systems seem to have a long lifespan.  Windows XP was released in 2001, is still supported by Microsoft, and remains the operating system of choice for many.  When the Y2K crisis was upon us, companies realized that they had hundreds or even thousands of programs, many of which had been running for years, that had to be checked.

Version upgrades to a product tend to “break” customizations.  This is an issue programmers have been dealing with since the early days of computers and software.  The impacts of version upgrades on customizations run the gamut: everything is fine as it is; a few small tweaks will take care of it; major re-coding needs to be done; or the customizations simply cannot be ported to the new version.

During my years at Cambridge Software Group in the mid-1990s, we sold products that were add-ons to Lotus Notes.  With every new version of Notes we would scramble to rewrite them.  By the late 1990s Notes had evolved to the point where our products were out-of-date.

Recently I spoke with a company that, a number of years ago, had commissioned a workflow automation system designed and built by customizing a subscription product.  The initial development had proved challenging, but was completed successfully.  Subsequently they paid for, but did not implement, several version upgrades to the product.  Why?  They knew that each version would break their customization and thus necessitate a fair amount of development effort to get the system working again.  Finally, after several years, support was being dropped for their version, forcing them to upgrade.  They soon realized that weeks of development (at expensive rates) were going to be needed to rework their customization.

Another downside factor for this company was that the regular version upgrades they had to defer most likely contained important bug fixes and useful enhancements.

Had this company originally opted for a custom-designed system, they would have had far fewer of these issues.  They may not have had to upgrade at all, or if they had chosen to do so, it would have been at the operating system or programming language version level.  In relative terms these are usually far less traumatic.  As we noted earlier, custom systems can often run for many years without upgrades.

5.      Integration, interconnection: lateral customization can be good.

So far, my comments have not been favorable to customizing a product or package.  Yet there is a kind of customization that, generally speaking, is good.  It is generally straightforward, less sensitive to version upgrades, and can add a great deal of efficiency to the business.

Many products and platforms provide APIs that allow developers to connect them with other systems.  Businesses generally run on many software systems, and there can be real power in interconnecting or integrating them.  This makes for more extensive process automation and workflows, and brings valuable data to systems where it can be analyzed or distributed.  For convenience, I refer to this as lateral customization.  Typically this involves getting data from one system into another, like an export-import, but automated.

Because lateral customization is often treated as a core feature by product companies, they will typically provide clean APIs, and will make an effort to make version upgrades as low-impact as possible.  Also, you are dealing with well-defined points of entry and exit, rather than changes to the fabric of the product.  Even if some coding is required for a version upgrade, it will usually be straightforward and well documented.  In addition, there may be standards that apply, such as ODBC for inter-database connections – meaning that you are working in a rational, well-supported world.

6.      Customization of core logic is not so good.

Unlike lateral customization, tweaking the core logic of the product is more challenging.  Let’s say the product is designed to move tasks from A to B to C.  If you would like the product to move the tasks from A to B to C to D to E to F to G, this is a considerable extension of capabilities.  If you want it to move tasks from A to D and E in parallel and then conditionally to C or F, and then back to A, this is likely to involve core customizations.  How vulnerable these will be to version upgrades will depend on the product.  However, it is certainly a red flag calling for serious evaluation.  As we noted above, the value of the core technology may not be sufficient to justify the long-term costs and complications, even if there is an initial savings.

Developers know that whenever you are customizing a product or platform, you are dealing with various limitations, quirks, and bugs.  Sometimes these can be brick walls.  When you are doing custom work, there is much less of that to contend with.  Such limitations can add to long-term costs, which may call into question the ROI based on the initial cost savings.

When selecting a customizable product, it is important to do further research.  If the customizations will be minimal, and almost everything can be done with configuration, there is only a small risk.  If a good deal of customization will be needed, be sure to assess the value of the core you are getting, as we noted above.  If you are satisfied, then make sure to talk with your software people about the quality of the software tools provided for customization.  Some product companies are careful to support the migration of custom code that worked on older versions, even when their upgrades add new customization capabilities.  Some do a good job of documenting the upgrade path for customizations.  However, not all are that friendly.

7.      Not your father’s custom software.

When I began work as a programmer in 1980 developing industrial control systems, we were working in assembly language with virtually no debugging tools, “burning” chips for every change, and writing pages and pages of code to do simple things like an integer divide.  We were also very constrained by processor speeds, especially challenging since much of the processing was real-time.  I remember spending days poring over a section of code just to find one more register to use for calculations during an interrupt, so I could avoid using much slower memory – and this was make-or-break to the viability of a product.  Despite all the challenges we managed to develop both the software and hardware for some extremely sophisticated products over a number of years.

Today the tools available to software developers have advanced by at least two orders of magnitude.  Computer speed, memory, and disk storage have ceased to be limitations for most developers.  They often don’t worry about efficiency and optimization any more.  We now have libraries of routines making it easy to do complicated things.  Our development, debugging, and code management environments are fabulous.  Software development still poses many challenges, and requires talent, creativity, artistry, and experience to do well.  However, those artists can now do things much more quickly, powerfully, and reliably.

What this means is that custom software is easier to develop, and the costs are approaching those of customizable products, certainly when analyzed over the long haul.  Also, due to all the improvements, custom software scales well.  What used to require a mainframe is now done on a small server box in the data center.  Developers can use readily available libraries, modules, and tools to build enterprise-caliber software, or web software that serves tens of thousands!

Also, experienced developers have a repertoire of code and routines they have done in the past, along with the know-how, which give them a leg up on new projects.

8.      Extensibility.

Custom software may cost more initially, but what happens when you want to enhance or extend your software?  New software capabilities can translate into new business opportunities.  Businesspeople often have wish lists of things they would like to do as software enhancements.

One of the beauties of custom software is that you are not boxed into the limitations and quirks of a platform when you want to add capabilities.  Your desired enhancements may or may not be easy to do.  However, unless you have selected a product or platform specifically because it will facilitate a planned enhancement, chances are it will be easier to do in a custom environment.  When custom applications are completed, developers are often aware of many “low hanging fruit” enhancements.  This is why it is often so difficult to tear programmers away from their code.  You hear them saying things like:  “In just a couple of hours, I’ll be able to add the … feature, which you weren’t planning on having until next year!”

Customize or Custom?

There is often a strong lure up front for customizable products because of a lower initial cost.  Hopefully, the considerations we have listed will help the businessperson avoid some of the perils involved.

Is it always a mistake to customize a product then?  By no means!  In fact, I would always argue that the proper sequence when you are looking to buy software is:

  • Is there a high quality product, package, or service that meets most or all my needs out of the box, and that only requires configuration?
  • If not, is there a high quality product, package, or service that can meet my needs by doing some customization?
  • If not, what are my options for hiring a programmer or a team to build a custom application for me?

Nowadays, there are so many packages and products out there, with so many niches covered, that finding a good fit for your business is easier than ever before.  However, the considerations above have shown how important it is to do your due diligence.  Resist the sales hype, the inflation of core capabilities and value, and the minimization of the challenges involved in customization.  Also, consider your long range aspirations and the overall costs over a number of years.

Don’t assume that a product will do what you want.

This is not an issue with customization per se, but it is relevant to the discussion.  If you find a package you think will work for you, don’t assume it will do the things you really need it to do – just because it is a high-end package or is expensive.  People will often see a demo or a review and be completely wowed, and just assume the product will do the minor thing they are looking for.  Make sure you investigate and find out exactly how it will do what you need.

In the last couple of years I have twice run into businesspeople who needed an accounting system to do split invoicing.  One CEO of a 150-person company was spending many hours each month fixing his invoices manually, because the expensive system he bought couldn’t do it.  Another individual, who did do a careful investigation, reported that he researched many systems before he found one that could handle split invoices, and it was quite expensive (but worth it for what he needed to do).  Perhaps today more products have those capabilities.

Ask for Expert Help

If you are unsure about products, packages, subscriptions, and customization, you can ask experts for advice.  Talk to some vendors first so you get a feel for the lay of the land, then find an expert who is not trying to sell you a product.  It is best to find one who has some familiarity with the types of products you are considering.  However, understand that even the “experts” may not be familiar with all the products you are considering.  There are so many thousands of products out there, and so many ways people want to use them, that even the experts will want to check in with colleagues, do research, grill the vendors, or do proofs of concept.  Consider hiring an expert to help you do your due diligence. This may seem like throwing away money, because you are not “getting” anything – whereas the vendors will come to your office and give you a free dog-and-pony show.  However, this relatively small up-front expense may save you from making a big, expensive mistake.

Preferred Systems and Platforms for Customization

Most software companies do work with customizable products and platforms, even if they specialize in custom work.  It might help the reader to understand how we at Russell Kennedy Partners look at those systems.  This is really a “personal” perspective that has evolved over a number of years.  There are many excellent companies that have very different specialties.  Therefore I offer this as an illustration, rather than as a recommendation.

  • Some packages are highly specialized – tax software, for example.  It would be totally impractical to try to duplicate these in a custom design.  If they provide developer tools, then they can be incorporated into or integrated with another system.
  • Sometimes we work with systems we are targeting for data and information, like the major social networking sites.  We use the documented APIs to retrieve data so we can build the software we want (see the sketch following this list).  The end result may be custom work, but by necessity it is dependent on other major systems.
  • Sometimes our assignment is to build an add-on to a product or package for a client, perhaps to create a product, or perhaps to provide a valuable business connector.
  • We sometimes work with packaged systems, but we typically prefer those that give us valuable features and that come with excellent software tools for interfacing and customization.  As we have worked extensively with Microsoft products in recent years, we favor platforms like SharePoint.  It costs less than comparable third-party offerings, has a tremendous support network, and can be readily customized with sophisticated classes in Visual Studio (Microsoft’s development suite).  A platform like this, “close to the operating system,” provides a broader safety net than the average third-party product.
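To make the second bullet a little more concrete, here is a minimal sketch of pulling data from a documented REST API.  The endpoint, access token, and field names below are hypothetical placeholders, not any particular network’s actual API; each provider publishes its own endpoints and authentication scheme, and those documents are what you would consult in practice.

    # Minimal sketch: retrieve data from a documented REST API so it can feed
    # a custom application. The URL, token, and field names are hypothetical.
    import requests

    API_URL = "https://api.example.com/v1/posts"   # hypothetical endpoint
    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"             # issued by the provider

    def fetch_recent_posts(limit=25):
        """Retrieve recent items from the provider's documented API."""
        response = requests.get(
            API_URL,
            params={"limit": limit},
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            timeout=10,
        )
        response.raise_for_status()            # fail loudly on API errors
        return response.json().get("data", [])

    if __name__ == "__main__":
        for post in fetch_recent_posts(limit=5):
            print(post.get("id"), post.get("message", ""))

Even a small script like this illustrates the dependency noted above: if the provider changes or retires its API, the custom work built on top of it has to adapt.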

Copyright © 2012 Patrick D. Russell

 

Posted in ARCHIVE (Posts prior to 2013), Software Price/Value | Comments Off on Canned, Customized, or Custom: Which Software Choice is Right for You?

Software Decisions: The Chicken and Egg Problem

Businesses are often faced with the purchase or development of major software systems. We often refer to major business systems as “enterprise software” or “workflow” or “process automation.” There are many varieties of these big systems, but what they all have in common is that they represent unique challenges, especially to businesses which don’t have their own software project teams (which most don’t). We distinguish enterprise software from commonplace business applications, such as web browsers, email, and office suites – as well as from specialty applications such as CAD/CAM, video, animation, and so on. Both the commonplace and specialty applications are generally personal productivity tools or high-powered workstation products.

In contrast, enterprise software interconnects individuals and systems and orchestrates their activities. Unlike conventional applications, individual users may have completely different experiences when they interact with enterprise software, as the tools, information, and capabilities they work with will be tailored to their roles and positions in the larger process. However, generally the user interfaces, though varied, run on conventional personal computers (not high-powered workstations), sometimes in web browsers. What makes enterprise software a challenge is not (necessarily) its vast number of features, huge processing power, or complex algorithms.  Rather, it is challenging because of its highly interconnected nature tied to large-scale operations and major processes, and also because it is deployed in a unique business environment and must interact and integrate with pre-existing systems and databases. Therefore, it almost always requires a custom perspective, and can be very difficult for organizations to plan without expert guidance.

There is a gap in expertise that often happens when organizations plan large-scale software. (Large-scale is relative to the size of the organization, and even small companies can face these challenges, especially if they gain a competitive edge through unique processes.) If the organization lacks internal resources to develop the system, it will first identify software vendors or consulting companies, and then will hire them, purchase systems, or both. In some cases the organization will hire new employees or contract workers to staff the project. These are all big decisions, with a lot at stake and high expectations for positive results and return-on-investment. However, the organization may not have adequate internal expertise to perform the due diligence necessary to make the best choices.

This is a chicken-and-egg problem, because the real expertise only arrives once the big financial commitment is made, which is too late to help the business when making the commitment — and too late to ensure wise use of its internal resources in the planning stages. Often high-level managers put large amounts of time and effort into the preliminary system planning. Much of this effort may be wasted or only marginally effective – and worse, can result in very poor technology selection, and ultimately a failed system, or else one that is overpriced and underperforms.

The traditional RFP (Request for Proposal) illustrates the problem. RFPs are often used to solicit proposals for big, custom jobs. Yet they are often of very poor quality, and may omit critical pieces. This is a recipe for a disgruntled client and a disgruntled consultant. Organizations may have technical people (such as the IT staff), but often these are not the individuals putting together the RFPs. Instead, the department managers write them.

There is a technology professional we refer to as the Tech Advisor or Tech Advocate, who offers a genuine alternative, and can fill the gap. Tech advisors do not generally come in as total solutions providers – not because they are lacking in qualifications, but because they choose to do advisory work as their service. What’s more, they can and will come in for a short engagement, before major planning is done and the big decisions are made. Since they typically represent only a small commitment of the organization’s time, money, and resources, they can usually be hired without the extensive screening required of the ultimate solutions providers, who represent big budget items.

Tech advisors can assist with early project planning and technology selection, and make them both more efficient and effective. In fact, the time and energy that high-level managers save by having expert support will often more than cover the total cost of bringing an advisor on board in the early stages. Moreover, the tech advisor’s role will ensure that the results of that planning are truly useful for the incoming solutions provider, and that the solution laid out in the plans will be an optimal fit for the organization’s needs, goals, and priorities.

A tech advisor is more than just a software expert. The profile includes a number of other characteristics, such as communications skills, project management experience, leadership abilities, enjoyment in working with a range of people, creativity, and a history of solving problems and thereby enabling companies to get out of precarious situations and move forward. Tech advisors generally have at least twenty years of experience in software, with diverse backgrounds spanning a variety of industries. They may work in teams, in order to cover a broader range of technologies.

Copyright © 2012 Patrick D. Russell

Posted in ARCHIVE (Posts prior to 2013), Tech Advisor/Advocate | Comments Off on Software Decisions: The Chicken and Egg Problem

Software Upgrades: A Mixed Bag

Software upgrades often leave computer users disgruntled.  Upgrades are designed to fix bugs and add new features.  However, adding features typically adds complexity, which can complicate work that had been comfortable.  A common complaint is that it now takes 5 clicks rather than 2 for an often-repeated operation.  A useful feature may have gotten buried in sub-sub-menus.  User interfaces change in ways that some users don’t like or don’t get.  Sometimes features are removed, perhaps because they were deemed of minimal importance, or the development budget didn’t allow them to be ported into the new codebase, or because the vendor pulls them out so they can be sold in a separate or more expensive package (which actually happens fairly frequently).

Upgrades can leave users in a quandary in other ways as well.  New bugs may be introduced, possibly hitting some users in exactly the wrong places.  Bloatware is another chief complaint.  New versions may be much larger and run slower, bogging down the computer, especially as old codebases are tweaked and appended to rather than being rewritten.  There are times when a new version is less reliable than the old, leading to errors, frustrations, and computer crashes.  Many times new versions only work well if the hardware is upgraded, an added and often unforeseen expense.

Another common lament is that an upgrade or new version has been released prematurely, without having been put through rigorous quality assurance.  (“Version 1 is really the next beta.”)  This is a disturbing phenomenon that sometimes leaves users wondering what committee is in charge.  A very popular website and portal (one I use frequently) seems to undergo a major facelift every couple of months.  Whenever this happens, very basic (as in front page) features usually get broken.  Within a couple of weeks things are fixed, but routine regression testing, or even a simple checklist matrix, would have flagged those bugs before release.  Many updates smack of pointless churning to most users – despite publicity campaigns to the contrary.
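As an illustration of how little such a checklist requires, here is a minimal sketch of a front-page smoke test.  The URLs and the page markers being checked are hypothetical; any team pushing out a facelift could run something of this sort automatically before the update goes live.

    # Minimal sketch of a release smoke test. URLs and expected markers are
    # hypothetical; each entry is (check name, page URL, text that must appear).
    import requests

    CHECKS = [
        ("home page loads",    "https://www.example.com/",      "Welcome"),
        ("search box present", "https://www.example.com/",      'id="search"'),
        ("login page loads",   "https://www.example.com/login", "Password"),
    ]

    def run_smoke_tests():
        failures = []
        for name, url, must_contain in CHECKS:
            try:
                page = requests.get(url, timeout=10)
                ok = page.status_code == 200 and must_contain in page.text
            except requests.RequestException:
                ok = False
            if not ok:
                failures.append(name)
            print(("PASS" if ok else "FAIL"), "-", name)
        return failures

    if __name__ == "__main__":
        failed = run_smoke_tests()
        raise SystemExit(1 if failed else 0)   # nonzero exit blocks the release

A checklist like this is no substitute for full regression testing, but it would have caught the kind of front-page breakage described above.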

Those in charge of product development sometimes forget a very simple fact.  Unless an update provides a new feature or capability that a particular user wants or likes, the change itself is a negative event for that user.

Thus upgrades may make users feel disabled and abandoned, or cramp their style.  Most serious users of popular desktop operating systems and office products have felt this displeasure at one time or another.  All in all, software upgrades are a net positive, but there are plenty of exceptions.  Many users hold off adopting major releases for at least several months’ worth of patches, and some users have even resorted to paying extra for “downgrades” to get back to a more reliable platform.

Another common snag is that companies buy into an upgrade of a major platform, for its many advantages or because of certain overarching priorities.  However, they sometimes discover that their workhorse software products and systems are no longer compatible.  They may be forced to run multiple platforms just to maintain the old and the new, and they spend a great deal of effort to bridge the different systems.  The pains of transition and the obsolescence of systems on which they have relied can be traumatic.  Workers may feel like they have been wrenched from their familiar environment, possibly forced into precarious job situations given the skill sets they have.

When software is being evaluated, it is important to be careful about the product versions.  Even if a product has a good reputation, one cannot always assume that the later version will be an improvement.  Different vendors have different track records with upgrades, which is at least a starting point for examination.  Sometimes it takes some digging to get useful scuttlebutt.

Associates in companies that do commodity-like installations of a particular package may prove to be a useful resource for software specialists and managers alike who are looking for intelligence on software versions.  These companies are typically involved with many deployments, and may know a lot about the features and idiosyncrasies, and the suitability for various projects.

Sometimes ways can be found to integrate and preserve the old software, or to encapsulate it within the new.  Other times it is better to change it out or rebuild it.  These are real world issues, for which there may not be easy answers, but an expert advisor may be able to present alternatives and help prioritize, and in some cases provide a path for incremental transition, preserving budgets and resources in the short and medium term, so change can happen in a measured way.

Copyright © 2011-2012 Patrick D. Russell
Posted in ARCHIVE (Posts prior to 2013), Cautionary Tales, Tech Advisor/Advocate | Comments Off on Software Upgrades: A Mixed Bag

Own Your Own Project

In my earlier blog, entitled The Investment, I talked about the importance of maintaining realistic expectations for the costs and efforts involved in custom software, making the business case, and avoiding the knee-jerk reaction.  Here I will outline a related topic.  One of the ways in which businesspeople can avoid misalignments in their expectations is to take ownership of their own projects.

Businesspeople unfamiliar with a software development cycle will often limit their thinking to the “idea” stage.  Numerous times over the last few years I have been approached by people who have truly excellent, highly innovative ideas.  They may even have fleshed out much of the business side of the equation – prospective customers, business partners, products, services, etc.  However, their concepts about the software application remain very nebulous.

A similar issue arises, particularly in small businesses, where there is a major process that runs inefficiently.  The business owners may be looking for software to improve the situation, but they don’t really know what that would look like.  These people are very vulnerable, and may wind up spending a great deal of money on systems that really don’t help them much.

One of the things we do at Russell Kennedy Partners is to get the clients involved in the actual design of their applications.  We are not asking them to plan the actual software, but rather to identify the major functional and process components of what they want to build.  When they do this exercise, they often find it to be both eye-opening and empowering:  eye-opening, because they come to realize just how many moving parts are involved; and empowering, because they now have the ability to take control of their system, to make their own informed decisions and tradeoffs.

This exercise can take a number of forms.  We sometimes talk with the client about building the major “use cases.”  Use case is a technical term, but you can think of it simply as a typical path a user would take through the application.  For example, a new (unregistered) user comes to your website, and wants to register as a user, or buy a product in your catalogue, or buy a product and register during the purchase.  What are the pages or screens the user will go through, and what is the functionality and flow?  We sometimes encourage clients to build their own wireframes, which are outlines of the components (such as fields and buttons) on a screen or webpage.  Sometimes clients at first come back with only a few of the wireframes that are needed, but then, with a little prodding and encouragement, they step up and deliver a complete set.
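As a purely illustrative sketch (the screens, components, and product flow below are invented, not drawn from any client project), here is one way a use case and its wireframe outline might be captured as simple structured data.  Clients do not need to produce anything like this themselves – a plain list on paper serves the same purpose – but it shows the level of detail that makes a use case genuinely useful to a development team.

    # Illustrative use case for a new visitor who buys and registers during
    # checkout. All screen and component names are hypothetical.
    use_case_new_customer_purchase = {
        "actor": "unregistered visitor",
        "goal": "buy a product and register during checkout",
        "steps": [
            {"screen": "Home",         "components": ["search box", "featured products", "Sign In link"]},
            {"screen": "Product",      "components": ["photos", "price", "Add to Cart button"]},
            {"screen": "Cart",         "components": ["line items", "quantities", "Checkout button"]},
            {"screen": "Checkout",     "components": ["shipping form", "payment form", "create-account checkbox"]},
            {"screen": "Confirmation", "components": ["order number", "email receipt notice"]},
        ],
        "alternate_paths": [
            "visitor registers first, then shops",
            "visitor checks out as a guest without registering",
        ],
    }

    # Print the path through the application, screen by screen.
    for step in use_case_new_customer_purchase["steps"]:
        print(step["screen"], "->", ", ".join(step["components"]))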

When the purpose of the software is to automate a more extensive process, sometimes we start with a little education in terms of what process automation/workflow is and what it can do.  Then we encourage the clients to define and outline the process: the specific steps, the conditions for completion or failure, the times allotted, the escalations, and so on.  At that point they must examine both what they do today, and what would work better for them tomorrow, with the benefit of structure and automation.  I recall one client who initially didn’t understand workflow.  However, as they say, the light bulb went on.  Then, with a little prompting, she used the graphics in Excel (something familiar to her) to diagram out the entire process as she wanted it to work.  This also saved her money (and possibly time as well), because she did herself what would otherwise have required software designers pestering her with lots of questions.
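Here is a similarly hedged sketch of a process outline captured as data; the step names, owners, and time limits are illustrative only.  Whether it lives in a diagram, a spreadsheet, or a few lines like these, this is the information a workflow designer needs from the client: the steps, who owns them, how long they may take, and where they escalate.

    # Illustrative outline of an approval process. Step names, owners, time
    # limits, and escalation targets are invented for the example.
    from datetime import timedelta

    approval_workflow = [
        {"step": "Submit request",   "owner": "requester",  "time_allotted": timedelta(hours=4),
         "on_failure": "return to requester with comments"},
        {"step": "Manager review",   "owner": "manager",    "time_allotted": timedelta(days=1),
         "escalate_to": "department head"},
        {"step": "Finance approval", "owner": "finance",    "time_allotted": timedelta(days=2),
         "escalate_to": "controller"},
        {"step": "Fulfillment",      "owner": "operations", "time_allotted": timedelta(days=3),
         "escalate_to": "operations director"},
    ]

    # Print each step with its owner, deadline, and escalation path.
    for step in approval_workflow:
        escalation = step.get("escalate_to", "none")
        print(f'{step["step"]:18} owner={step["owner"]:10} '
              f'time allotted={step["time_allotted"]}  escalates to: {escalation}')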

These are examples of ways you can take ownership of your project.  The downside of doing so is that your expectations may deflate somewhat.  You may realize that you will be paying more and doing more to get where you want to go.  However, the upside is that you will dig into your project and own it.  You will make the major decisions, though good guidance will always be available if you work with a solid professional team.  You will have a grasp of all the interlocking pieces of the system.  If a bug surfaces later, you will be able to troubleshoot enough to get a quick response from your team, since you will know exactly how things should work.  Finally, if things don’t work out with your team, or the team has only come in as early-stage tech advisors, you will be much closer to having a working requirements document that you can bid out to other companies.

Copyright © 2012 Patrick D. Russell

Posted in ARCHIVE (Posts prior to 2013), Software Price/Value | Comments Off on Own Your Own Project

The Investment

Businesspeople often have skewed perceptions about the cost of software.  A few months ago I talked with a group of reasonably tech-savvy people who wanted to build a fully functioning social networking website on a $10,000 budget – and to have it designed, architected, and developed essentially from scratch by a team of highly experienced experts.  Fortunately, these folks were reasonable, and when they began to grasp the scope of what they were looking for, they realized it would be necessary to recalibrate their plans and their funding sources.

There are a lot of alternatives out there for companies that want to accomplish a lot on a limited budget.  Various packages and applications, or cloud and Software-as-a-Service plans, may meet 90% of a business’s objectives at a low cost.  Businesses can often adapt to the idiosyncrasies and limitations in these systems because they are cost-effective overall.

However, when a business requires very specialized software, it is looking at custom development of some sort or other.  If you look around you, you are surrounded by a world of custom software.  Most websites involve customization, whether they are kept up by weekend-warrior coders or by large, dedicated development teams.  Don’t forget that all the applications, packages, and subscription services out there were themselves custom built in the first place.

Some companies look to outsourcing overseas to reduce the costs of software development.  This is a very mixed bag.  Yes, there is tremendous talent overseas that can be hired at much lower cost than can be found in North America and Europe.  This can work well with dedicated management, which small companies cannot provide unless they themselves are software companies.  There are communications and cultural issues, not to mention time zones.  At one point in my career I did telephone interviews for an H-1B visa program to bring software developers to the US, mostly from India.  What I found was that most of the candidates submitted to us by the overseas agencies were very bright, but very inexperienced.  At the time I got a strong sense of caveat emptor.

My suggestion for companies looking into custom work would be to avoid a knee-jerk reaction to the up-front costs.  Recognize that custom software is an investment.  The returns over time can be enormous, many times the cost.  Custom software can enable companies to carve out new territories, and to develop distinctive competitive edges.  I know one small company in a real estate niche that made a substantial investment in custom software in the mid-2000s.  Yes, it was expensive, and beyond what most similar-sized businesses would even consider.  They were very brave and forward-thinking.  However, once the system was finished, they started reaping the rewards immediately, experiencing a huge boost in closed business, a reduction in costly errors, and an efficiency that permitted them to enlarge their operations.  Moreover, they survived the Great Recession and are now thriving, while most of their competitors folded several years ago.  Was their investment worth it?  The business owners certainly think it was.

If you have serious objectives, be realistic.  You probably won’t be happy if you cheap out.  Some companies do well by figuring out ways to “pay as you go” – by bringing in revenues from another area to invest in the software.  This can be creative, and is quite practical if the software can be designed so that it starts yielding benefits even while it is being developed.  For example, one firm was effectively able to bill out a portion of its software costs to a client.  This was a fair and equitable arrangement, because the software delivered superior results to the client at standard pricing, while the firm used the partially developed software to save time and effort.

Regardless of how you work it out, understand that custom software is an investment, and should be evaluated in terms of its business case.  If your company is simply not able or willing to make such an investment, then limit your explorations to the packages and services that most closely meet your needs.  Look for flexibility based on sophisticated configuration capabilities, which you can do yourself without customization.

Copyright © 2012 Patrick D. Russell
Posted in ARCHIVE (Posts prior to 2013), Software Price/Value | Comments Off on The Investment

Software Subscription Prices

In the earlier blog entry entitled Monetization:  How is it affecting your business?, we noted that many organizations are choosing to forgo internal hosting for subscription services.  This can be tremendously cost-effective for many businesses and organizations.  However, a hardnosed examination is advisable, instead of taking it for granted that the value proposition is straightforward.  We have seen a few companies make commitments that land them in a much higher cost bracket than they anticipated – putting them in the awkward position of deciding whether to continue to pay, or to scramble for an alternative.  Often the price structure features a “loss leader” that can quickly become very expensive.

There are real business decisions here.  The value proposition for these services is real, but there are times and places for various options.  Sometimes an expensive option may be reasonable in terms of total cost of ownership during a period of organizational development, but at some point the cost-benefit analysis may point to a different approach.  Also, a subscription offering may be surprisingly affordable for certain features and services, but more expensive for others.  Different offerings, even from the same company, may not be equal bargains, so it is smart to be selective.

Price differences can sometimes be hard to believe. For example, Microsoft has been promoting its cloud office/SharePoint products, BPOS, and the latest incarnation, Office 365.  Recently they announced a 92% reduction in the monthly fees for data storage, from $2.50 per gigabyte to 20 cents in the enterprise versions of Office 365! Purportedly they did so to speed acceptance of the platform.  While this is great news for the customer, it illustrates the wide range of cost and value out there.  Also, it is often not that easy to find out what the actual pricing is going to be, especially when it is high.
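To put the quoted rates in perspective, here is a quick worked calculation at an assumed storage footprint of 500 gigabytes; the volume is illustrative, while the per-gigabyte prices are the ones cited above.

    # Worked example of the quoted storage rates at an assumed volume.
    old_rate = 2.50   # dollars per GB per month (rate cited above)
    new_rate = 0.20   # dollars per GB per month (rate cited above)
    gigabytes = 500   # assumed monthly storage footprint (illustrative)

    old_monthly = old_rate * gigabytes        # $1,250 per month
    new_monthly = new_rate * gigabytes        # $100 per month
    reduction = (old_rate - new_rate) / old_rate

    print(f"Old cost: ${old_monthly:,.2f}/month, new cost: ${new_monthly:,.2f}/month")
    print(f"Reduction: {reduction:.0%}")      # 92%

At that volume, the monthly storage bill drops from $1,250 to $100 – exactly the kind of swing that makes careful inspection of price schedules worthwhile.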

Subscription services are a fact of life in today’s world, but, caveat emptor, the shrewd customer will recognize the value of due diligence and careful inspection of the current terms, as well as the price and the options for scaling up.  Again, no one size fits all.  What works for one organization may not work for another, and it is worthwhile to review and amend the arrangements as internal conditions, price structures, and options change.

Copyright © 2011-2012 Patrick D. Russell
Posted in ARCHIVE (Posts prior to 2013), Software Price/Value | Comments Off on Software Subscription Prices

Monetization: How is it affecting your business?

Have you examined your cell phone bill recently?  There have been a number of stories on the news recently about overcharging.  Some companies have figured out how to charge add-on fees in the most deceptive ways.  Some of these charges may be illegal, but the practice continues because some company officials believe that the revenue outweighs the consequences.

We have a cautionary tale here.  Organizations sometimes unwittingly get involved with a carefully planned monetization scheme, where they end up spending more than planned.  If an offer seems too good to be true, it probably is.  Sometimes we choose the low-cost solution and it really is a bargain.  Other times we miss the hidden costs, in some cases implanted by ruthless business administrators and accountants.

The ability to plan a budget well depends on working with business associates who provide a fair value for services with a give-and-take flow.  The old ideas of the baker’s dozen and lagniappe are actually prevalent among consultants and vendors who go out of their way to make sure the client is satisfied.  This is a two-way street, calling for honesty and fairness on the organization’s part.  Without that, we simply have one party trying to take advantage of the other.

Monetization is a fact of life in the corporate world, but if organizations understand what is going on they can make better choices. An example came up at a conference I attended recently. A speaker described one of the ways the successful software companies define their product lines. Companies often have the Silver, the Gold, and the Platinum versions of products (or something similar). The Silver version is the entry level that draws many first-time customers, but corporate analysts identify one feature that most users will soon want, and that feature is put into the Gold version (along with many other features). There is another such feature in the Platinum version, designed to quickly up-sell Gold users. This practice isn’t much different from going to the store to buy a ballpoint pen, and having to buy a 10-pack.

The licensing and fee structures for subscription-based software are another area that calls for scrutiny.  Will there be a feature set, or a higher service level needed sooner rather than later, that the organization is not aware of, but which will come with sticker shock?  Frankly, vendors, by and large, don’t see it in their best interests to point out these issues up front.  They are not obligated to do so, as long as their disclosures comply with whatever rules are in effect.

Recently while investigating a cloud product a client was considering, I was surprised at how difficult it was to find the actual price structure.  The entry level prices were featured on the company’s website, but it was very difficult to find the costs the client would be facing given its projected needs.  I eventually found the detailed price schedule in a technical article that was not on the company’s site.

Copyright © 2011-2012 Patrick D. Russell
Posted in ARCHIVE (Posts prior to 2013), Software Price/Value | Comments Off on Monetization: How is it affecting your business?