No. Not another security article. Please, haven’t we all had enough? We’re afraid already. We are sick to death of the doomsday warnings about the number of glaring security holes in just about everything we touch and the inadequacy of our own security measures. We don’t want to be lectured again about how careless we’ve been. We don’t need to be pitched yet another snake-oil, safe-as-a-baby’s-bottom, can’t-survive-the-apocalypse-without-it, magic-button security solution – that costs only slightly more than the thing it’s protecting and probably makes it so hard to use that we’ll end up just giving up on the whole thing.
As an editor, I am pitched security stories constantly. It seems that new companies are starting up every single day with a mission to make money from our fear and paranoia. Yes, we could become the Henny Penny Technology Press, running around yelling about how the sky is falling and we’re all doomed. And yes, there are real security threats out there that require all of us – especially engineers – to take reasonable precautions. But our preoccupation with keeping the bad guys at bay may have gotten just a little out of hand, and it’s giving rise to an industry that’s possibly even less scrupulous than those it purports to defend us against.
Our view of security is – choose a metaphor – can’t see the forest for the trees, the blind men and the elephant, too many cooks… Really, it’s just an ad-hoc mess. In engineering, we go around designing locks for just about everything we build. We want to be prudent and responsible, but often we’re not even sure exactly what we’re protecting or whom we are protecting it against. And, in the world of security, the cast of characters would make the credits run for several hours after the movie ends.
Let’s take a look at a hypothetical design using something like one of the new SoC FPGAs. These devices combine programmable hardware (FPGA fabric) with conventional processing subsystems all on one chip. They could be used to implement an enormous variety of systems, but let’s pretend for a moment that we’re designing some kind of high-value communications hub for home or industry. It will have custom hardware in the FPGA fabric, embedded software (including operating systems) running on the applications processors, wired and wireless communications, third-party applications, and so forth. In other words, it will be a pretty typical modern electronic system.
So, who needs protecting? Well, in our example, Altera or Xilinx would like to be protected. They want to be sure that the system isn’t being manufactured with fraudulent parts, and that their design tools and IP are not being reverse-engineered. The makers of any third-party hardware IP included in the FPGA fabric would like to be protected. They want their IP to be used only where it’s appropriately licensed. The creators of the FPGA design want to be sure that nobody can snatch and reverse-engineer their hardware design, and the systems company wants to be sure that this is a real, non-cloned, non-overbuilt instance of their product. You know – one somebody actually paid them money for.
On the software side, there is a similar menagerie of protectees. The developers of the software IP, the operating systems, and the applications want to be sure that only a licensed version of their code is being used, and that nobody is reverse-engineering or hacking their software. Network providers want to be sure that the system is a good citizen of their network, and that it isn’t used to defeat their moats and guard towers. The end user of the system also wants their data to be protected, and they hold the system company accountable for the integrity of their information as well. Sadly, this is just a partial list. And the list of potential bad guys is just as long.
Know what else? Some of the good guys are also bad guys.
Yep, it’s true. The world is not a melodrama, made up of mustache-twirling Snidely-Do-Wrongs and white-hat-wearing Goodstrong McCleans. It’s actually more of a dark comedy with highly conflicted characters who swerve back and forth across the axis of good and evil like drunks on the centerline of a dark winding highway. Sometimes, security is even needed to protect people from themselves. “Are you sure about that? It could be pretty bad. Press OK if you really want to.”
Security people use a number of models to approximate their view of the world. Usually, those models describe “zones of trust” which are little guarded islands with virtual guard booths and lists of who is allowed to do what inside. Unfortunately, this model overlays on top of the ad-hoc landscape we described before, so there are all kinds of crazy things going on in another dimension that the trust-zone guards can’t even see.
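The trust-zone idea is easy to sketch in code. Here is a minimal, hypothetical model – the zone names, actors, and actions below are invented for illustration, not taken from any real security framework – that shows both the guard-booth behavior and its blind spot:

```python
# A minimal sketch of the "zones of trust" model described above.
# All zone, actor, and action names are hypothetical examples.

class TrustZone:
    """A guarded island: tracks which actors may perform which actions."""
    def __init__(self, name):
        self.name = name
        self.permissions = {}  # actor -> set of allowed actions

    def grant(self, actor, action):
        self.permissions.setdefault(actor, set()).add(action)

    def is_allowed(self, actor, action):
        # The "guard booth": anything not explicitly granted is denied.
        return action in self.permissions.get(actor, set())

# Hypothetical zones inside our communications hub:
fpga_zone = TrustZone("fpga_fabric")
fpga_zone.grant("systems_company", "load_bitstream")
fpga_zone.grant("ip_vendor", "verify_license")

app_zone = TrustZone("application_processor")
app_zone.grant("end_user", "read_own_data")

print(fpga_zone.is_allowed("systems_company", "load_bitstream"))  # True
print(fpga_zone.is_allowed("end_user", "load_bitstream"))         # False
```

Note what the sketch does *not* model: each zone’s guard sees only its own island. An attacker who moves between zones – say, from the application processor into the FPGA fabric – never crosses any single guard’s field of view, which is exactly the extra dimension the trust-zone guards can’t see.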
The stacks that comprise modern systems have grown thick, and, with various layers coming from different suppliers, it’s understandable that our overall security system comes out looking a little like a clown car. We rely on every layer and every piece of the system to worry about its own security in its own way, and against its own set of anticipated villains.
But, is the ad-hoc approach that’s landed in our laps necessarily the best way? Or should we have some grand unified field theory of security that manages all aspects of our systems, securing everything with one giant industrial-grade lock? Certainly a consistent approach would improve the user experience. Having to memorize passwords for some parts, get license keys for others, and use biometric identification or other crazy crypto tricks for still others, all in the same system, may be a little more than off-putting for the typical user. A unified approach to security could make that situation noticeably better. And, a lot of us have a tendency to put a bank-grade vault door on the front while leaving a flapping screen on the side entrance. Unified security would help to eliminate that issue as well.
But a unified approach would also give us a single point of failure. If someone defeats the one big lock, depending on their methods, they might have free run of the entire system.
Most system engineers are not security experts, and that is a fundamental problem. There is a temptation to design our own security measures, but – lacking specific expertise – our efforts are often naive and inadequate. On the other hand, if we employ third-party solutions, we run the risk of a common attack taking us out along with the rest of the third party’s clients. Bad guys like to go for the hack that gives them maximum leverage, so if they can design an attack for a standardized security system, they’ve just broken into hundreds or thousands of systems instead of just one.
The best approach is to really understand your particular system, what and whom you are trying to protect, the potential consequences of a successful attack, and the types of attackers who would benefit from breaking in. That will let you scope your security efforts and decide what level of engineering investment and user inconvenience is prudent in locking down your product. Then, you should assess your own level of expertise in implementing the required measures. Don’t be afraid to bring in help if you need it, or to adopt third-party solutions if they’re appropriate. Chances are, your product is not defined by its security level, and you should spend the majority of your creative energy on the features that truly differentiate it for your customers.
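The scoping exercise above can be made concrete with a back-of-the-envelope risk ranking: list assets, attackers, and consequences, score each pairing, and put your engineering effort where the score is highest. The threats, attackers, and scores below are hypothetical examples invented to illustrate the method, and the consequence-times-likelihood score is one crude convention among many, not a standard:

```python
# A rough sketch of the threat-scoping exercise described above.
# All threat entries and scores are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Threat:
    asset: str          # what we are protecting
    attacker: str       # who benefits from breaking in
    consequence: int    # 1 (nuisance) .. 5 (catastrophic)
    likelihood: int     # 1 (unlikely) .. 5 (expected)

    def priority(self):
        # Crude risk score: consequence x likelihood.
        return self.consequence * self.likelihood

threats = [
    Threat("FPGA bitstream", "cloner overbuilding our product", 4, 3),
    Threat("user data", "remote attacker on the network", 5, 4),
    Threat("third-party IP license", "unlicensed integrator", 2, 2),
]

# Spend effort (and accept user inconvenience) where the score is highest.
for t in sorted(threats, key=Threat.priority, reverse=True):
    print(f"{t.priority():>2}  {t.asset}  vs.  {t.attacker}")
```

Even a table this crude forces the useful questions – what, from whom, how bad, how likely – and makes it harder to put the bank vault on the front door while the side screen flaps in the breeze.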