Selecting an enterprise software package is rather like buying a puppy: it is not just for Christmas.

OK, it may not be for life, but after the initial fun of installing your new software, playing with it and getting it to obey your orders, you have plenty of years of looking after it until that final trip to the vet's.

Once implemented, software packages may be in place for 10 or 20 years, but in these busy times the focus is usually on the here and now: will it solve my immediate problem?

Given the actual longevity of systems, it is worth spending some effort in picking the right package.

Even if your initial software purchase is half a million pounds, you will undoubtedly spend several times this amount in internal effort and in consulting costs on the first project.

In the case of master data management, surveys by the Information Difference show that a project using software costing x to buy will spend an average of 4x in effort actually implementing it.

Then there are maintenance costs, both for the software itself and for the people who support it.

If we conservatively estimate 15 per cent support costs, then that initial piece of software costing ‘just’ half a million is actually going to set you back over £6m over a 10-year lifespan.
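The arithmetic behind that figure can be made explicit. A minimal sketch, assuming the 15 per cent annual support applies to the combined software-plus-implementation cost (the article does not spell out the exact basis, so that assumption is flagged in the comments):

```python
# Worked lifetime-cost arithmetic for the figures above, in £m.
# ASSUMPTION: the 15% annual support is charged on the combined
# software and implementation cost, not on the licence alone.

licence = 0.5                       # initial software purchase: half a million pounds
implementation = 4 * licence        # the 4x implementation effort from the MDM survey
initial = licence + implementation  # £2.5m up front

annual_support = 0.15 * initial     # 15% a year on the combined cost (assumed basis)
lifetime = initial + 10 * annual_support  # over a 10-year lifespan

print(f"Up-front cost:  £{initial:.2f}m")
print(f"Annual support: £{annual_support:.3f}m")
print(f"10-year total:  £{lifetime:.2f}m")  # comes to £6.25m, i.e. "over £6m"
```

On this reading the half-million licence is barely 8 per cent of what the system actually costs over its life.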

A couple of recent consulting engagements have shown me that many corporations have a lot to learn about the vendor selection process.

In both cases a systems integrator had been involved in helping draw up a shortlist, yet in both cases the shortlist was bizarre: vendors with entirely different strengths and track records in the relevant industries had been shortlisted, and some obvious candidates ignored.

In one case a vendor said to me privately: “Of the three shortlisted products, at least two of us are in the wrong place”.

Vendors can get put on a shortlist for a range of reasons: they happen to be an incumbent vendor at the customer in another area, the research into the market may have been sketchy, or a vendor may have an undisclosed relationship with the supposedly neutral systems integrator running the process.

It is normal for a systems integration practice to have preferences that may not be apparent, ranging from the project team's simple familiarity with a particular technology through to an undeclared referral fee payable by the vendor.

Best practice in vendor selection involves a rigorous, structured evaluation process based on a careful analysis of user requirements.

Broadly speaking these can be grouped into commercial, technical and functional criteria.

Technical criteria may include the platform that the packages run on: the client may have a preferred database, web browser or operating system, for example.

Commercial criteria are not just about price: the ability of the vendor to support remote locations or time zones, the complexity or otherwise of its billing processes, its track record in your industry and the quality of its customer references may all be important.

Functional criteria are what you actually need the product to do in your project (while bearing in mind the needs of likely follow-on projects).

It is very important that clients discuss this internally: it is common for such lists of features to be drawn up by someone in the IT department or by a consultant, and in such cases it is easy for the real business needs to be misunderstood or overlooked.

A feature that an IT person regards as useful or impressive may have little or no relation to actual business requirements.

There will also be a list of functions that people think might be handy, but it is important to prioritise these.

If you have a list of a dozen functions that are all deemed vital, distribute 100 weighting points among them: this forces you to decide which functions are really crucial and which can be compromised on if necessary.

Avoid drawing up endless wish-lists of every conceivable thing that you hope the product might do, and aim for a manageable list of functions that can actually be tested against your own data in your environment.

In one recent engagement I was astonished to hear that the client had planned to send out a request for proposal and then simply pick the software vendor from that, with no thought to actually testing the software out.

The client seemed surprised that software might not actually do what the vendor claimed during the sales process.

Having a well-structured evaluation process including software testing adds some cost, but it is a rounding error compared to what you are going to spend on the lifetime cost of the software. Don’t end up being sold a pup.