“The More It Changes, The More It Stays the Same” (Jean-Baptiste Karr, 1849)
Many things have changed over my time working in and around software, but one thing has remained constant: estimating software projects is tricky. Of course poor estimating is not confined to the world of software, as can be seen in the sometimes spectacular overruns of civil engineering projects, but software engineering has a justifiably poor reputation for coming in on time and budget.
An October 2012 McKinsey study of over 5,000 projects found that IT projects over $15 million in size come in, on average, a dismal 45 per cent over budget and seven per cent later than planned, whilst delivering 56 per cent less value than expected.
Software is tough to estimate because it is a slippery thing: people have a pretty good idea of what to expect when they build a bridge or a house. Mankind has been building bridges and houses for an awfully long time, so there is a body of historical data on how long it should take. Custom software is very flexible, and with that flexibility comes the potential for scope to change and for functionality to morph in unexpected ways.
There is no shortage of material written on the subject, but I observe that there is a tendency in software to reject any accumulated wisdom that is more than a few years old, in the belief that it can no longer be relevant in software’s fast-changing world. After all, what is the point of looking at techniques from the 1980s in the world of agile development and open-source code? Actually I believe that there is a lot to be gained, and that the issues confronting software developers are rarely as unique as they like to believe.
There are quite well-established techniques to estimate the time and effort needed to produce a certain amount of software functionality or code. As far back as 1979 an IBM engineer called Allan Albrecht invented the function point, a technology-independent way of measuring software size. Instead of counting lines of code, which have many arbitrary elements (differing layout styles, the productivity differences between lower- and higher-level languages, and so on), the function point was a more abstract notion. A set of rules was devised that counted inputs, outputs, interfaces, files and inquiries. It was then possible to build up historical knowledge of how long it took to build, say, a five thousand function point application in a given programming language. Unfortunately the rules used in counting are more arbitrary than ideal, and they require staff to be trained in how to count consistently.
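To make the counting idea concrete, here is a minimal sketch of an unadjusted function point tally. The weights below follow the commonly quoted IFPUG-style "average complexity" values, but the real method also rates each element's complexity and applies an adjustment factor, so treat this as an illustration, not a compliant counter; the example application figures are invented.

```python
# Assumed average-complexity weights per element type (IFPUG-style).
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts):
    """Sum each counted element multiplied by its assumed weight."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical application: 30 inputs, 40 outputs, 20 inquiries,
# 12 internal files and 4 external interfaces.
app = {
    "external_inputs": 30,
    "external_outputs": 40,
    "external_inquiries": 20,
    "internal_logical_files": 12,
    "external_interface_files": 4,
}
print(unadjusted_function_points(app))  # 120 + 200 + 80 + 120 + 28 = 548
```

The point of the abstraction is that 548 function points means roughly the same amount of delivered functionality whether the system is written in COBOL or Python, which is what makes historical comparison possible.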
However, whether you are counting function points or lines of code, building up historical data on how long your own organisation takes to deliver software makes sense, as there are many factors that make comparison with other organisations difficult. Some software has to be written to stricter quality standards than others (you do not want to be regularly rebooting your air traffic control system) and there are many factors that affect productivity. What has been well established is that scope changes bedevil software projects, and that the longer the project the worse this becomes. So-called “agile” development is one of many responses to this, trying to break a large software problem down into smaller, more tractable pieces, and regularly delivering software in incremental, small releases. In actual fact this approach is far from new, Tom Gilb having written about it in the 1970s, but it has gained greater traction in recent years. Agile development certainly has its merits, but the extra software releases required can actually add to the elapsed time needed for a given project.
Barry Boehm wrote about a software delivery equation as far back as 1981, showing a very interesting link between the number of lines of code, the time needed to complete a project and the number of programmer-years required. In particular this showed that bringing in a software project artificially early requires vast additional effort, as adding more and more programmers slows down the productive ones and causes additional communication issues. The amount of extra effort needed is so large as to be counter-intuitive, but when I had the chance to observe two otherwise almost identical projects at Shell in the 1990s, one of which had a constrained end-date, the real-life extra effort needed was quite accurately predicted by the formula, much to the incredulity of the project manager.
All these estimating theories agree that the largest drain on productivity is project size, with very large software projects being much less productive than smaller ones. This is quite intuitive, and so it is worth making every effort to break projects up into manageable chunks, but this is not always practical.
For all the advances in software, project managers would do well to look back at some of these older books and their estimating techniques. Software development issues change less over the years than is commonly assumed.