Last month I argued that a good starting point for the CIO grappling with the transformational changes wrought by virtualisation lay in considering its impacts on each of three landscapes:
- the technology delivery landscape
- the vendor competitive & contractual landscape
- the enterprise operational, systems & contractual landscape.
A key fourth landscape, certainly in need of transformation, is testing.
Testing has a very techie reputation in our industry — testing professionals risk being conceptualised as Mr No. A project team sweats to write complex code against a client’s specifications, and back it comes from the testing folk, rejected.
In a recent paper I co-authored with Geoff Thompson of the specialist testing consultants Experimentus, we noted that:
“The biggest impact on the costs and timescales of any IT project is the stops and rework required to fix defects found during testing in the later stages of the project”.
In the emerging world of the virtual, the creation and delivery of software and systems demonstrably requires a reformed testing discipline.
For a start, in a turbulently changing world, requirements will be more fluid. In one failed government contract I reviewed recently for the board of the contracted supplier, some 3,000 detailed specifications had been written in stone three years previously.
Little wonder the exercise came to an expensively sticky end for both parties. In today’s world, requirements flex, and rightly so.
The IT profession is responding with the development of new agile methodologies. The testing profession is making its integral contribution.
But even then, what happens when the new software is launched into operational use, and what happens if later in its lifecycle the underlying computing and network services that support it are themselves transformed?
And what if the new systems structure is actually an assembly of existing software objects and only a modicum of new code — think apps developed on Force.com?
And, as services integration replaces systems integration and the new software is delivered as a service, how well will it interface with other software also delivered as services?
Geoff and I went back to basics and asked: “What is the essential purpose of testing?”
The business CEO answered: “When I invest significant sums of money in new software capabilities, I want to be absolutely confident that my intended outcomes will be delivered, on time and to budget — that for me is the sine qua non.”
We concluded that testing was about confidence in outcomes.
The challenge is to create an approach to testing that encompasses the full lifecycle of new software and systems, from requirements development, through code creation including with objects and systems assembly, to the intended operational environment, to full compatibility with a galaxy of standards, and on to robustness to future transformations of that operational environment.
This is about quality control designed specifically for the new world of the virtual, about testing whose business is the assurance of that quality.
I must declare an interest. I was recently elected to chair the independent Test Maturity Model integration (TMMi) Foundation. Its purpose is to promote a structured means of measuring the maturity, and thus the effectiveness, of an organisation’s testing processes.
It complements the well-established CMMI process quality framework.
Why did I agree to take on this role? I have no background in testing. But in the world of visual arts in which I also work I recognise two key roles vital to enabling the wider creative process to deliver.
One is the curator, the other is the critic. Their contributions shine a light on how our industry tackles the challenges it faces.
The curator builds deep experience in the works and the workings of the artist, nurtures living talent, assembles and presents current and established work in contexts that aid interpretation and the development of insights vital to the ongoing delivery of the creative process.
In the ICT world the curator is the programme manager, the project director who builds on deep experience to assemble a relevant software team, nurturing and motivating it, and directing ongoing delivery processes towards successful outcomes. The curator as CIO.
In contrast the critic is the force of informed challenge, the source of rigorous analysis that draws on experience and proven conceptual frameworks to create value-building insights and critiques of the work of the artist and the curator. The critic as quality assurance.
In our industry we have a strong tradition of curators but little tradition as yet of the serious critic (analysts, yes, in abundance, but critics, no). The rigorous TMMi method potentially underwrites the work of the practised critics we will need to guide the development of an effective new testing discipline, focused on its real purpose: the delivery of confidence in outcomes.
Richard Sykes was vice president of IT at ICI in the 1990s and is now a consultant