The software industry has been around long enough now that one would hope lessons had been learned from past mistakes. Sadly, the philosopher George Santayana's observation that “Those who cannot remember the past are condemned to repeat it” is as relevant today as it was in 1905.

A key lesson of software projects is that size matters: the bigger the project, the less productive and the riskier it is. Yet the other day I came across a struggling project at a well-known US firm. Rather than break the problem down into bite-sized pieces, they had decided to fix their shared data (often called ‘master data’) once and for all, and to tackle it enterprise-wide, across all locations, and covering all the key shared data such as suppliers, products, customers, locations and so on. This was all to be done in a ‘big bang’ approach.

As the company in question is a multi-billion dollar revenue organisation operating in many countries, they got more of a big crunch than a big bang. Frantic de-scoping left the prospect of a vastly reduced project going only partially live, over a year late, and with most of the systems the project was supposed to replace still running in parallel.

Why do large projects so often fare poorly? There is decades-old theoretical work on software projects, but it is frequently overlooked. A common thread running through these studies is that the larger a project becomes, the greater the share of resources that must be spent simply communicating objectives among staff and fixing misunderstandings, rather than getting on with the task in hand.

With a project of five people all in the same office, a change of specification can be made and communicated quickly, but with a project of 100 people the same change needs to be carefully documented and disseminated among sub-teams, with plenty of scope for Chinese whispers developing along the way.
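One simple way to put a number on this, as an illustration rather than anything from a specific study, is to count the pairwise communication channels in a team, which grow as n(n−1)/2. A team of five has just 10 such channels; a team of 100 has 4,950, and every one of them is a chance for a change of specification to arrive late, diluted or distorted.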

The result is that large software projects are dramatically less productive than smaller ones, irrespective of the technology used or the industry. Worse, if a struggling project looks likely to miss its deadline, it is exceedingly hard to recover by bringing more resources to bear. As more willing hands join the team, productive project members have to stop work to bring the newcomers up to speed, and the larger numbers make the communication issues even worse.

There is a formula that describes the relationship between the amount of resource applied, the optimum time needed for a project of a given size, and what is required to compress that schedule. The amount of extra resource needed to make any real difference is so vast that initially I did not believe the figures. That was until, one day, I had the chance to see the effect in action.
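One well-known formulation of this relationship is Putnam's software equation, in which the effort required is roughly proportional to the cube of the system size divided by the fourth power of the schedule, i.e. effort ∝ (size / productivity)³ / time⁴. Whether or not you take the precise exponents at face value, the message is the same: shave even a little off the natural schedule and the effort, and hence the resource, required shoots up.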

While I was still at Shell, two parts of the business ended up essentially doing the same project. They were independent projects but, to all intents and purposes, identical in scope. Both were estimated to take 13 months, but a decision was taken to shorten one of them to 12 months. Money was not a major factor, and more resource was piled in to bring the end date forward. Remarkably, the compressed project took 50 per cent more effort to bring in than the one which ran its natural course, something that caused general bewilderment at the time but which actually fits quite well with the formula above. The mathematics may make rather dry reading, but the old phrase that “nine women cannot have a baby in one month” sums it up quite well.
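To put rough numbers on it using the fourth-power relationship above: compressing a 13-month schedule to 12 months implies around (13/12)⁴, or about 1.4 times the effort, an increase of roughly 40 per cent, which is very much the territory of the 50 per cent overrun we actually saw.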

Sadly, much of this project management theory lies neglected today, in an industry in which youth and energy can be valued more highly than experience. In the case of the big master data project I mentioned earlier, a likelier route to success would have been to break the problem into smaller pieces: say, by starting in just one business area or data domain, building out a solution for that subset, trying it and fixing any issues that came up. The company could then steadily roll out the proven solution to other business units and other data domains, each miniature project being relatively small and hence, according to the theory, more productive. The overall programme might perhaps take longer, though even that seems uncertain. At least some tangible benefit would have come to part of the business from this incremental approach, rather than the whole company merely sharing the cost of a monolithic project.

The lesson is that very large IT projects are inherently risky and unproductive. If at all possible, split them up into smaller elements with an incremental delivery process. True, not all projects are open to this bite-sized approach, and there will be times when an external factor means that a deadline really is fixed rather than being set arbitrarily by senior management. However, all too often we are simply not learning from our past mistakes.

About the author

Andy Hayler is founder of research company The Information Difference. Previously he founded data management firm Kalido after commercialising an in-house project at Shell.
