It's a good day for some when a bank's ATMs start dishing out far more money than the customer requested, and brightens up many a commuter's journey to work as they read all about it in their morning paper. But clearly it's not a great day for the bank, and it's a downright stinker, career-wise, for those who designed and tested, or failed to test, or inadequately tested, the ATM software involved. And for much applications software – for air traffic control, or government-held personal databases, or production lines – there is emphatically no lighter side to the catastrophes that can arise when the testing process is shown to have been wanting.
Small wonder, then, that growing dependence on IT in all parts of the economy has led to explosive growth in the software testing business, which, despite the cost, is seen as a relatively inexpensive piece of insurance compared with the costs that can arise when it is skimped. The growing weight of legislation and regulation has strengthened the focus on testing for the many industries that face shut-down if they fail compliance tests – from drug companies and banks to airlines and utilities.
So why is it, given the huge expansion in testing that has occurred, that we continue to see spectacular software failures that can be traced back to inadequacies in the testing process? Analysis of such failures reveals that in many cases the problems stem from a wrong-headed testing culture – one which views testing as an end-of-line activity that takes place as the last step before new software goes into live production. It is what might be called a Y-Z, as opposed to an A-Z, approach.
If you look at the reasons why new software is developed, you can see why the Y-Z approach fails. An organisation rarely sanctions a significant software development just for fun, or to keep its IT people happy. It is generally a response to a real business need – a new market opportunity, a drive to cut costs, a takeover or merger, new regulation (or deregulation), a new product launch, or the perceived benefits of a large-scale IT transformation. In all cases plans are made and a business case put together. Invariably, new IT systems are required to support the new venture, and a statement of requirements is produced as a preliminary to developing the applications that will meet those requirements.
In theory it's a logical progression, but in practice things can easily go wrong at every stage. There are many examples of disastrous breakdowns in communications when the statement of requirements is drawn up and signed off by the business sponsors, even when the business and the IT people concerned are all highly competent professionals. Why so? Simply because the two groups often 'speak different languages', with their own jargon, assumptions, modes of thought and, often, blissful ignorance of the other side's ideas and assumptions. Further problems can occur at the next stage, when the senior IT people heading up the project spell out the requirements to the teams actually developing the new applications. The former group often lack the latter's knowledge of the arcana of Java, C++, SAP, Oracle or whatever, and further communications lapses can all too easily happen.
Finally, at the testing stage, test scenarios are often devised by IT specialists who lack a realistic appreciation of the real-life business situations likely to arise. The net result can be a new application that runs just fine for a while, but is highly prone to stumbling and breaking a leg – or its neck – when the unforeseen happens.
The good news is that it doesn't have to be like that. As testing has expanded, so has the body of expertise around the whole testing process. That is true of many large multinationals, of some public sector agencies, and of IT services companies like my own. Realisation has grown that, to be truly effective, testing has to be carried out on an A-Z, not a Y-Z, basis.
Techniques and methodologies have been developed that can be successfully applied to every stage of the software development process, not just the final pre-production phase. For instance it is now possible – and indeed highly desirable – to apply such techniques to review the IT department's statement of requirements against the needs of the business sponsors, and to do so in an objective way that is based on huge amounts of experience of the pitfalls involved. It is a safety net that can potentially save enormous amounts of budget being spent travelling in the wrong direction. There are also now proven techniques to tell you how much testing to do – how to test the business-critical elements of new applications without spending a fortune on unnecessary testing of the relatively trivial.
Also good news is the recent trend to centralise testing expertise within an organisation. Rather than having two or three testing specialists tacked on to each individual project team, they are increasingly being grouped into a corporate testing 'centre of excellence', with resulting cross-fertilisation of ideas, standardisation of processes and steady growth in knowledge and skills.
There is also the increasingly popular option of outsourcing one's testing processes to a third-party supplier, with the benefits of independence and of dealing with testing professionals who are staking their reputation on doing a good job. Another motivation is that of accommodating peaks and troughs in the testing workload. Testing is also seen by some as a low-risk way to try out outsourcing. But should you entrust testing to your existing third-party IT/outsourcing partner, or choose a fully independent tester? Opinions are divided, but many IT companies now offer a 'third choice' by forming arm's-length, independent subsidiaries dedicated to testing, and this option can offer an attractive combination of independence plus security.
But for those CIOs who are still complacent or indifferent about testing, and who haven't yet grasped the need to view it as an A-Z process, my message is: wake up, before disaster strikes. And don't forget the many studies showing that the cost of rectifying errors post-production typically runs from 10 to 250 times the cost of rectifying them pre-production.