I was speaking at a cyber security seminar a few weeks ago when another speaker announced they were going to reveal one of the biggest lessons they had learned about security over the past few years.
I sat up. What was this exciting fresh insight going to be?
The big lesson was this: the speaker announced it was important to think about security up front. Awkward; one of those tumbleweed moments. Presumably before this "discovery" they'd been busy banging out code and cobbling together systems without much thought for security engineering.
I wasn't just baffled, but annoyed. This is 2016, after all, not the early 1990s. No wonder computer security remains so poor if "leaders" working in tech have such a weak grasp of the fundamentals. Unfortunately, I've encountered a similar lack of knowledge lately when doing technical assurance reviews of major programmes. Simple questions such as "How many of your developers are trained in writing secure code?" produce blank looks or, more often, panic and the hurried, limp excuse: "Oh, we're just sending some of our developers on a security essentials 101."
Really? Halfway through the programme? Whatever happened to good old security by design and the idea of trustworthy software? No wonder we even have hackable "smart" lightbulbs making the news. There seems to be a whole cadre of developers who skipped security training. How is that even possible?
Then there are the broadcast emails I've seen bouncing around organisations along the lines of "Someone is running an unidentified process in our development environment. Can you please stop it or let us know what it is?" Seriously? You're broadcasting to the world that you're running an insecure and untrusted development environment?
Another speaker exhumed the old canard that security can be reduced to numbers and therefore easily quantified. But effective security is a matter of risk management, and risk is a perception, not a number. The idea of reducing dynamic, complex and evolving environments to a simplified static model doesn't survive contact with real systems, people, relationships and processes for long: security is complex and emergent. Humans, usually the weakest link in creating secure systems, rarely behave predictably or rationally. That is part of the fun, and the challenge, of working with tech.
Saying you can easily quantify the security of complex systems is as barking as claiming you can quantify human behaviour. I'd like to see the equation for that. It's time we stopped people "discovering" what has long been known, or thinking they can manage security by reducing it to a set of numbers in boxes. We need a collective security engineering reboot right across tech of the kind that Bill Gates rightly imposed on Microsoft back in 2002.
It's time to do a much better job for our users - frontline staff, citizens, and businesses alike - who pay the price when we get it wrong. So let's ensure everyone working in technology, from those developing critical national infrastructure to those building the latest gadgets for the IoT, has a solid grounding in security.