Far be it from us to suggest that readers of this title should use anybody else’s misfortune to their own advantage, but the continuing IT woes of RBS do seem to present every CIO with added bargaining data for their next round of capital spending negotiations.

An outage of a few hours on the busiest shopping day before Christmas (dubbed Cyber Monday) is estimated to have affected some 750,000 customers of RBS, NatWest and the Ulster Bank. The outage left them unable to buy online or even to pay for their groceries at supermarket check-outs.

The following week hackers, no doubt sniffing at an open door, mounted a denial of service attack on the RBS website and so caused a second, shorter outage. It seems inevitable that this won’t be the last of the problems RBS experiences with its IT - watch this space.

Of course the problems have been ongoing for some time. Last summer a software upgrade caused an outage which resulted in 1.3 million customer complaints and a compensation bill for the bank of around £175 million. This in due course also resulted in an FCA investigation, which is still to report back. The bank has already announced plans to spend a further £450 million on new IT systems to address these earlier issues. The latest failure, layered on top of all this history, may cost the bank much more in lost business if customers lose faith in the group’s ability to serve their banking needs.

RBS CEO Ross McEwan, Stephen Hester’s recently appointed successor, was quick to apologise after the most recent failure, putting out a statement the following day: "Yesterday was a busy shopping day and far too many of our customers were let down, unable to make purchases and withdraw cash. For decades, RBS failed to invest properly in its systems. We need to put our customers' needs at the centre of all we do. It will take time, but we are investing heavily in building IT systems our customers can rely on."

McEwan’s apology hints at a broader problem, which lies in the outdated and much-modified older systems RBS (and other banks) still rely on for some core IT processes. These are systems built before mobile, web-enabled, end-user-managed banking – a time when closed, physically secure systems could be designed to manage the high-speed transaction processing required.

Since then demand has increased and the nature of use has changed. Access has been opened up with new add-ons, increasing system complexity and making security harder than ever. And the people who built and maintained these systems have long since moved on or retired.

The choice RBS and others face is between investing more to try to improve their out-of-date core systems; investing in new systems which continue to feed off the same core systems; or simply starting all over again. The last option is generally the least attractive, but probably the only realistic one for RBS.

Of course these are the very issues almost every CIO (unless they work for a start-up) recognises. Those CIOs lucky enough to have their own place on the board will also recognise the conflict at board level between carrying on with what you are already doing, administering the necessary fixes and extensions to make the essential new stuff happen, and accepting that a solution might need an expensively re-engineered change programme.

This board conflict isn’t simply about intransigence – though that may exist too. Making massive changes to a complex IT-led solution involves its own raft of risks. As we’ve seen from the RBS example, customers won’t tolerate even relatively short downtimes without massive repercussions. The amount of risk planning and investment involved in such wholesale change is therefore huge for companies like RBS.

But like it or not, it’s clear that some banking systems are getting to the end of their lives, and that such massive re-engineering is now on the table. We’ll learn early in 2014 exactly what RBS plans to do about its very public technology woes. We’re guessing it won’t be quick and it won’t be cheap, but it needs doing all the same.

For the rest of us the whole process we’ve witnessed is highly educational. Having watched failure, censure, apology, reimbursement and investment play out in turn, we have a perfect lesson plan that we can apply to our own IT infrastructure and spell out in our own boardrooms. No CEO, after all, wants to be in Ross McEwan’s position, prostrating themselves before a mob of angry customers.

It is in the CIO’s gift to provide CEOs with an alternative scenario before it gets this bad. Most likely it will be a problem requiring rather more investment in IT capital spending than the board was anticipating...