A few days ago I had lunch with Amichai Shulman (CTO and co-founder, Imperva) – a world-class security expert and (no less important) a very nice guy. As is typical in our lunch meetings, our conversation drifted toward security. This time, it was about “why software security is so hard”. After all, software security is a feat of engineering, just like making sure that brick-and-mortar bridges don’t collapse, so it should be subject to clear criteria and requirements just like any brick-and-mortar engineering project.
Of course, there are arguments that explain why software security is different from making sure bridges don’t collapse. Differences in complexity (of both the requirements and the systems themselves), in the number of components involved, in how well we can model the behavior of those components (from atoms up through cement and solid construction elements), and so on can perhaps account for the gap. But somehow we were not altogether comfortable with these arguments. For example, staying with the bridge analogy, why should the banking-software requirement that “actions in the account must be initiated by the owner of the account” be considered inherently more complex than “the bridge should sustain vehicles weighing up to 80,000 pounds”?
The discussion continued over email, and at one point we reached a eureka moment.
Bridge versus Software Security Analogy
In the bridge analogy, the bridge must withstand probable physical “challenges”. For example, an accident can happen in which a truck carrying 1,000 gallons of acid crashes and spills the acid on the bridge. It is also possible for a magnitude 7 earthquake to strike the bridge’s area over the next 100 years. And of course, there is the occasional hurricane. These are the extreme-end engineering challenges the bridge designer faces, and he/she can assume either that the bridge will withstand each such challenge completely unharmed, or that repairs/maintenance will take place right after the disaster to restore the bridge to perfect condition.
But in the software security world, the situation is different. One has to assume that if something is even remotely possible, then an attacker will come along and somehow trigger it. And of course, if there are 10 such totally unlikely (but physically possible) scenarios, an attacker can cause them all to happen in whatever sequence she likes (or simultaneously, if she wishes).
So back to the bridge analogy – it’s like designing the bridge to withstand an acid spill, an earthquake and a hurricane all at the same time. And this is where things get very difficult to engineer. Normally the bridge is designed to withstand each such “attack” separately (allowing maintenance teams to bring the bridge back to perfect condition after each one), since, after all, each such “attack” is very rare. But when the “attacks” happen in rapid succession (which is extremely unlikely to occur naturally), the bridge will collapse: when the acid spills, it creates cracks in the concrete and weakens the steel skeleton in some spots; the earthquake right after further expands these cracks and perhaps tears off pieces of the bridge; finally, the hurricane takes down the already feeble structure.
Another example is a scenario in which a hole forms in the middle of the bridge, the hole gets filled with dynamite, and finally the fuse is lit. This is of course extremely unlikely to happen naturally (thus we don’t expect bridges to withstand such a scenario), but when malicious intent is involved, it is no longer improbable: consider a saboteur who works a full day in the middle of the bridge with a jackhammer to create a hole, then fills it with dynamite and lights the fuse. In fact, when we say that a bridge is well built, we usually mean that it withstands natural/unintentional disasters, and we silently dismiss deliberate attacks.
Intent Changes The Odds
In other words – the attacker’s intent changes the odds. Any event that has a tiny probability in the normal physical world must instead be treated as having an unknown probability, which means it is no longer a priori unlikely to happen.
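To see how sharply the odds shift, consider a minimal sketch in Python. The yearly probabilities below are invented purely for illustration, and the attacker model is deliberately simplistic: any physically feasible event is assumed to be triggerable at will.

    # Minimal sketch -- made-up yearly probabilities, not a real risk model.
    natural_odds = {"acid_spill": 1e-4, "earthquake": 1e-3, "hurricane": 1e-2}

    # Nature: treating the events as independent, the chance of all three
    # striking in the same year is the product of three tiny numbers.
    p_natural = 1.0
    for p in natural_odds.values():
        p_natural *= p
    print(f"natural co-occurrence:     {p_natural:.0e}")  # 1e-09

    # Attacker: every feasible event can be triggered on demand, so each
    # factor collapses to ~1 and the combined scenario becomes near-certain.
    p_attacker = 1.0
    for _ in natural_odds:
        p_attacker *= 1.0
    print(f"adversarial co-occurrence: {p_attacker}")     # 1.0

Nine orders of magnitude separate the two outcomes, yet nothing about the physical system changed – only the assumption about who is pulling the trigger.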
And to answer our original question: software security is so much harder than building good bridges because in software security, our engineering intuition about what is likely or unlikely to happen can mislead us – the attacker can orchestrate events that would otherwise be very unlikely to happen naturally.
Since software security requires a different mode of thinking, it comes as no surprise that an entire industry (the information security industry) has formed to address the security gap, leaning on people (security professionals) who are able to employ this different mode of thinking and to analyze, assess and mitigate security risks.
I think this has an interesting implication that sets the software security industry apart from the software industry at large: in my opinion, progress in software security is made much more by introducing new knowledge (produced by security researchers – “thinkers”) than by inventing new technologies (built by security people/engineers – “tinkers”). I see a lot of new knowledge poured into existing technologies, but only rarely do I bump into a truly novel security technology.
In short – information security is a game of knowledge (in the security-professional sense, not necessarily threat intelligence) and expertise. The players with the deepest and most comprehensive knowledge and understanding of the problem space are likely to provide the best solutions (not necessarily ones involving novel technologies).