“We’re boned.” – Bender Rodriguez
Debugging code is hard enough. Now we’re taking on a whole new level of software sanitation.
By that I mean security features – anti-hacking, backdoors, encryption, anti-malware, and so on. I hate the term “cybersecurity,” but that’s a pretty succinct description of what we’re all expected to add to our devices now. And we’re totally unprepared for it.
Fact is, most hardware and software developers (and their bosses) haven’t got the first clue about how to build in security. We might as well perform our own tooth extractions or appendectomies. We might have a vague idea that Something Needs To Be Done, but we don’t have the skills, the education, or the experience to do the job properly. Nor do we have the inclination. “Oh, yes, please! I don’t have enough work to do, so load more on my plate, boss! (Work for which I am wholly unqualified.)”
Conventional design and debugging go through two stages. First, we create a new gizmo. Then, we poke at it to make sure it does all the things we want, and – just as important – we make sure it doesn’t do too many things that we don’t want. Plenty of studies have shown that we typically spend more time on the debugging phase than on the creative phase. That is to say, we spend more time removing bugs than we did putting them in.
The debugging phase itself also goes through two stages. We look for the expected failures first, and then we try to guess the unexpected failures – the weird corner cases, the off-by-one errors, the code glitches caused by outside hardware events, the one-in-a-million coincidences, and so on. What happens when this variable is incremented exactly at midnight? What if the user enters an invalid date? What if the Ethernet cable is hit by 10,000V just as we’re backing up a security key? That kind of oddball stuff.
But now there’s a third phase, and it’s a lot tougher to debug because, frankly, we don’t know what to look for. It’s the security phase. If everything has a Wi-Fi, Ethernet, USB, Bluetooth, or ZigBee connection, everything is a vector for malware. We have no idea where it’s going to come from or how it will manifest itself. And we have no background – no frame of reference, even – for solving such nebulous problems. Where do you even start?
“Hey, I just design thermostats. My products aren’t targets for cyberattacks. Go talk to the guys who make credit-card readers.” Nope. Wrong answer. Thank you for playing. It’s precisely the humble, mundane, wallflower products that will likely be targeted first, and it’s precisely because we don’t think they need security features. Doesn’t every spy movie in the world start with the bad guys finding the unprotected ventilation duct? The unshielded exhaust port? The USB slot in the back of the keypad?
When everything is connected to everything else, it doesn’t matter where the security hole lives. Chain, meet weakest link. And in our interconnected device topology, you likely won’t even know what other devices you’re connected to. Who designed that file server over there? Where did that sensor cluster come from? How safe is that new tablet that just showed up on my IP subnet?
This is what’s going to make debugging so infernally difficult. We can barely maintain reliable code on our own systems, let alone anticipate and somehow mitigate outside threats from someone else’s device – a device over which we have zero control. Even if your own device is thoroughly and completely hardened – an unlikely situation, frankly – how can you protect yourself against all the other devices on the same network? Who’s to say someone didn’t hack that keypad over there and send you a completely legitimate but totally harmful packet that compromises the security of something else downstream?
Worse, there’s the uncanny nature of many attacks. People can extract encryption keys from thin air without even so much as touching your hardware. They can monitor RF emissions. They can even listen for audible (and inaudible) sound waves coming off your hardware. Side-channel attacks and other spooky effects at a distance are now commonplace. Who even knew such a thing was possible a few years ago – and now we’re supposed to design systems that are immune to it?
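One narrow slice of the side-channel family – the timing flavor – at least has a well-known software mitigation. A naive `memcmp()` returns as soon as two bytes differ, so response time leaks how many leading bytes of a key or MAC an attacker has guessed correctly. The common idiom (sketched here from first principles, not any particular library’s API) is to accumulate differences instead of branching on them:

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time comparison sketch: touch every byte, never bail out
 * early, so execution time does not depend on where the first
 * mismatch occurs.  Returns 1 if equal, 0 otherwise. */
static int secure_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];        /* OR in any bit differences */
    return diff == 0;
}
```

Note what this does and doesn’t buy you: it blunts remote timing measurements, but it does nothing against power analysis or RF emanations. That’s the point – each side channel needs its own, often physical, countermeasure.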
In conventional debugging, we generally know what problems to look for. More importantly, we know when they’re fixed. And we know that they stay fixed. Does the system crash when it receives a malformed packet? Oops, my bad, we’ll patch that. Problem solved. But how do you debug for side-channel attacks that leave no trace? What are you going to do – read a few articles on the Internet, wave an oscilloscope probe near your device for an hour, and declare it safe? Based on what evidence? And thus one more insecure device goes out the door. Or, more likely, many thousands of identically and equally insecure devices are released onto an unsuspecting world.
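The malformed-packet crash, at least, is the fixable kind. The root cause is almost always the same: trusting a length field that arrived over the wire. A minimal defensive sketch – the packet layout here is made up for illustration – looks like this:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical wire format: 1-byte type, 1-byte claimed payload
 * length, then up to 64 payload bytes.  The claimed length is
 * attacker-controlled and must never be trusted. */
#define MAX_PAYLOAD 64u

static bool packet_is_sane(const uint8_t *buf, size_t received)
{
    if (received < 2)
        return false;               /* header itself is incomplete */

    uint8_t claimed = buf[1];
    if (claimed > MAX_PAYLOAD)
        return false;               /* claims more than we can hold */

    if ((size_t)claimed + 2 > received)
        return false;               /* claims more bytes than arrived */

    return true;
}
```

Validate before you parse, and the crash-on-malformed-packet bug class largely disappears. The side-channel class has no equivalent checklist, which is the asymmetry this column is complaining about.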
It’s a bit like the argument for mandatory vaccinations. Herd immunity is increased when every member of the community is protected. All it takes is one or two “anti-vaxxers” in the network to bring down the whole system. And whom do you blame in that case – the entry point of the infection, or all the other devices that were insufficiently protected against just such an occurrence? Which one are you?
We’re simply dealing with a technical problem – actually, an assortment of related technical problems – for which we are unprepared. Our bosses, managers, and funders aren’t any better informed than we are. Oh, sure, they’ll attend a few seminars and come back to wave an accusatory finger at the assembled troops and admonish them to “add security” before returning to Mahogany Row. We wish them good luck approving the additional budget, manpower, equipment, and (most of all) the additional time it will take to deal with this new mandate. True security experts are in short supply, so good luck hiring anyone who can actually help you (as opposed to just taking your money).
And what about your hardware resources? That little thermostat you’re working on has just an 8-bit MCU and a few KB of flash and RAM. Where are you going to put the security code? How are you then going to encrypt so it’s not reverse-engineered? Where’s the RF shielding supposed to fit? How can you prevent DPA attacks with that cheap and cheesy power supply? It’s the smallest, weakest, cheapest devices that are the most poorly equipped to defend themselves – thus making them the ideal targets.
We’ve got a bunch of amateurs* putting connected devices in our homes. Baby-monitor cameras, wireless routers, cellphones, light switches, garage-door openers, PCs, and even cars driving down the highway have all been repeatedly hacked. Do you think the makers of those devices are going to delay shipping because a product isn’t secure yet? Would you? Ironically, even security features are hackable. Fingerprint sensors have been spoofed with Krazy Glue or even Gummi Bears. And, as others have pointed out, fingerprints make lousy security tokens anyway because you can’t revoke them. Same goes for the concept of “pay by selfie.”
*Amateurs in the sense that they’re inexperienced with embedded device security. They might be masters of the universe when it comes to real-time C code, but total neophytes at blunting malware.
And, worst of all, security is impossible to fully debug. Unlike with a conventional software bug, you’ll probably never know when your device fails. It’s not as if the bad guys are going to notify you, so you’ll never get an opportunity to fix the problem that you didn’t know was there in the first place. Good luck with that.
Debugging was already hard, but at least we knew the enemy, and the problem was bounded by the confines of our own box. Now the world is your petri dish, and your job is to inoculate it. All of it. In the Internet of Things, it’s just one more thing to deal with.