Given the amount of money that companies spend on enterprise software, it might seem that by now they must be pretty good at the processes around selecting it. Yet time and again I come across instances where organisations seem to employ techniques little more careful than throwing darts at a list of vendors.
A recent conversation about a six-figure enterprise software purchase illustrates this well. A friend had just spent a day travelling to see a particular application in action at a customer site. Fair enough: it is always sensible to check out customer references. Why, then, did my friend seem so irritated about the trip? "Total waste of time; we have already chosen the software package, and it is not this one; in fact the contract for the other package was signed yesterday." Er, why then, I asked, did they waste a day of their time (not to mention the time of the vendor and the customer reference)? "We have to be able to show finance that we only picked the chosen package after a competitive evaluation." So how was the winner chosen? "Oh, we had a great demo from another vendor; I know we are doing this the wrong way around, but we need to get something in quickly."
This is, I hope, unrepresentative of the software procurement practices of that organisation, but it is a long way from unique. Time and again I encounter cases where a software product has been purchased after only a cursory examination of alternatives, with little or no structure to the evaluation of those alternatives, and sometimes with no real check that the selected solution meets the requirements at all.
Why do companies so often make a hash of selecting and purchasing software? Usually the people ultimately in charge of the purchase decision, the end-users with the cheque book, buy software rarely, and are frequently in a hurry to solve a pressing problem. This means there is "no time" to do that pesky evaluation of alternatives and drawing up of decision criteria. No time for that, perhaps, but there will be plenty of time to rue the decision if it turns out that the software doesn't do the job.
Clearly the amount of effort that should be put into evaluation needs to be proportionate to the scale of the decision. A small purchase of a utility for a specific one-off project requires less scrutiny than a major application that will be deployed globally.
However, even for fairly small purchases (which, let's face it, in enterprise software are still usually five-figure sums) it is worth following a few simple steps. Firstly, make sure the problem to be tackled is written down and agreed by the key stakeholders who will be using the software. It is surprising how easily misunderstandings develop between people about what is really needed, especially if they have different backgrounds or come from different parts of the organisation. Next, see whether something already deployed in the organisation could do the job. A quick email and a few phone calls may well turn up something in another department that actually meets the need, and may even already be paid for (if an enterprise licence is in place).
Assuming that the requirement definitely cannot be met by a software package already deployed, and that it is sufficiently complex that it cannot trivially be custom developed, then you need to look for an off-the-shelf package. A short amount of time spent investigating will pay dividends. Is there an in-house technology team that can help here? If not, perhaps the company already has a subscription to an analyst firm specialising in the area that could be called on to advise? These days, of course, a great deal of material is available on the internet.
Assuming that this trawl yields a number of possible solutions, decide how to reduce the list to a manageable size. Engaging with software salesmen is not fun at the best of times, and you do not want a dozen of them breathing down your neck at once. Review your requirements document and think of criteria that would genuinely narrow the field. For example, you may want to consider only solutions already deployed in your industry, or from companies with a local support team, or below a certain price, or that run on a platform that is standard in your organisation. Use these criteria to get down to, say, three or four alternatives. This is where someone knowledgeable about the market in question (a consultant or analyst) can help.
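Those hard criteria amount to nothing more than a simple filter applied to the long list, which is easy to write down explicitly (and keeps the shortlisting defensible). A minimal sketch in Python; the vendors, criteria and figures below are entirely hypothetical:

```python
# Hypothetical long list of vendors, tagged against four hard criteria:
# deployed in our industry, local support team, price, and standard platform.
candidates = [
    {"name": "Vendor A", "industry_refs": True,  "local_support": True,  "price": 45_000,  "platform_ok": True},
    {"name": "Vendor B", "industry_refs": True,  "local_support": False, "price": 80_000,  "platform_ok": True},
    {"name": "Vendor C", "industry_refs": False, "local_support": True,  "price": 30_000,  "platform_ok": True},
    {"name": "Vendor D", "industry_refs": True,  "local_support": True,  "price": 120_000, "platform_ok": False},
]

BUDGET = 100_000  # hypothetical price ceiling


def passes(vendor):
    """A vendor stays on the list only if it meets every hard criterion."""
    return (vendor["industry_refs"]
            and vendor["local_support"]
            and vendor["price"] <= BUDGET
            and vendor["platform_ok"])


short_list = [v["name"] for v in candidates if passes(v)]
print(short_list)
```

In practice you would loosen or reorder the criteria until three or four names survive; the point is that the filter is written down, not carried in someone's head.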
Next, decide how you are actually going to pick a winner, before you get in touch with the vendors. Ideally, devise tests based on your real-life situation, e.g. take a sample of your own data that the software will have to deal with. The more money you are spending, the more effort can justifiably be put into specifying these success criteria and tests. Try to ensure that the evaluation reflects the complexity, data volumes and scale that your real system will face; do not just assume that the demo the vendor builds will magically scale up, or handle some peculiarity of your requirement. Then, and only then, send the requirements document to the short-listed vendors and start the evaluation process. The more carefully structured the process, the smoother the evaluation will go. More importantly, you will have confidence that the solution you choose will actually do the job.
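One common way to fix the decision rule in advance is a weighted scoring matrix: agree the criteria and weights before any demo, then fill in the scores as the structured evaluation runs. A minimal sketch, again with entirely hypothetical weights and scores:

```python
# Criteria weights agreed with stakeholders before the evaluation starts.
weights = {"meets_requirements": 0.4, "scalability": 0.3, "cost": 0.2, "support": 0.1}

# Scores out of 5, recorded during the structured evaluation (hypothetical).
scores = {
    "Vendor A": {"meets_requirements": 4, "scalability": 3, "cost": 5, "support": 4},
    "Vendor B": {"meets_requirements": 5, "scalability": 4, "cost": 2, "support": 4},
}


def weighted_score(vendor_scores):
    """Sum of each criterion's score multiplied by its agreed weight."""
    return sum(weights[c] * vendor_scores[c] for c in weights)


ranked = sorted(scores, key=lambda v: weighted_score(scores[v]), reverse=True)
for vendor in ranked:
    print(f"{vendor}: {weighted_score(scores[vendor]):.2f}")
```

The arithmetic is trivial; the value lies in committing to the weights before the salesmen arrive, so a slick demo cannot quietly rewrite the decision criteria afterwards.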