Not too long ago, the IoT Security Summit happened – a day spent with all things security as they relate to the IoT. So today we’re going to go over a few interesting points that were made in the course of the event, while acknowledging that much more happened than what we’ll cover.
With all the hoopla over the IoT – and the attendant concerns about security – there’s certainly a lot of energy going into the topic of security by developers. How much of that rolls into actual product has yet to be proven, given that poor real-world IoT security examples are now providing fodder for conference speakers to make the point, “Here’s how you do it wrong.” And, of course, they get more column inches than do the folks who get it right.
So we’re not going to ride the gloat boat here. (OK, maybe just once.) Better to talk about how to do it right, according to speakers from Mocana, ARM, and TI.
The “o” Matters
Mocana made what, in retrospect, might seem like an obvious observation – yet it’s obvious only once you hear it. They pointed out the difference between “IT security” and “IoT security.”
IT security focuses on the network. It looks for odd activity and patterns of activity, often aided by artificial intelligence. As a result, breaches can take 180 days to be detected – and then another 80-90 days to remediate.
No way that’s going to work for the IoT. Any security issues must be detected immediately, so, rather than focusing on the network, the focus must be on the network nodes – devices. And Mocana’s emphasis is on trusted devices that are self-protecting. That includes IoT gadgetry as well as hubs or gateways connecting them. Any such devices can be points of entry; block those points and, hopefully, you block entry.
The critical elements to making this work are:
- Know that the device is in a trustworthy state (this is a job for the manufacturer; the customer has to trust that this has been handled), and
- Install only known-good updates from a trusted source (a joint job for the manufacturer and customer).
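The first of those requirements often boils down to a measured-boot-style check: hash the firmware and compare it against a known-good value recorded by the manufacturer. Here’s a minimal sketch of that idea; the function names and the idea of keeping the reference hash in plain code are purely illustrative – on a real device the known-good measurement would live in tamper-resistant storage such as a TPM register.

```python
import hashlib

# Hypothetical known-good measurement, recorded by the manufacturer.
# A real device keeps this in tamper-resistant storage, not in code.
KNOWN_GOOD_SHA256 = hashlib.sha256(b"firmware-v1.0").hexdigest()

def measure(firmware_image: bytes) -> str:
    """Hash the firmware image exactly as the boot code would."""
    return hashlib.sha256(firmware_image).hexdigest()

def device_is_trustworthy(firmware_image: bytes) -> bool:
    """Compare the measured state against the known-good value."""
    return measure(firmware_image) == KNOWN_GOOD_SHA256
```

Any change to the image – a single flipped bit – produces a different hash, so a tampered device fails the check before it ever joins the network.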
Obviously, the key theme running through this is trust. ARM amplified further on this theme, noting the challenges of the IoT ecosystem – which is much more fractured than an IT ecosystem. The number of gadgets and customers for any given vendor is going to be enormous. And we’ll presumably have an enormous number of vendors. So all of this is going to be arm’s-length: none of that, “Oh yeah, you can trust them – I had beers with the CEO just the other day.”
So that means that each participant in that system has to establish trust. You may be familiar with the concept of the root of trust (RoT); in this case, there is no single root. There may be multiple roots. Each component has to vouch for itself.
12-Step Security
ARM listed 12 different aspects of trust – some obvious, some less so.
- Lifecycle management: from on-boarding to decommissioning.
- Root of Trust management: handling all of those roots and making sure they’re legit.
- Data protection: both at rest (in storage) and in use (being worked on). Encryption is key here.
- Cryptography: locking down data and messages so that they’re safe from prying eyes.
- A good true random-number generator (TRNG): necessary for effective cryptography and authentication.
- Software validation and encryption: making sure that your software hasn’t been corrupted.
- Secure manufacturing: this is worth some more words below.
- Software-update validation: making sure customers can’t get hoodwinked into loading bogus code.
- Rollback protection: for when an update fails; you have to have a known good state to revert to.
- Trusted storage: where data goes to stay alive while not in use; this is about the storage itself, not whether or not the data is encrypted.
- Execution environment isolation: obviously ARM points to their TrustZone processors for this, although those tend to be bigger processors than might be found on an IoT gadget. Microkernels and other approaches can also work.
- Debug authentication.
Debugging and Manufacturing
It was interesting that they mentioned debug security. This gets to something that TI said in a presentation on node security: all of the internals of a device – as well as the security itself – must be accessible during validation, test, and debug. But test and debug ports tend to be dangerous back doors. Even if the ports aren’t brought out in the actual product package, a hacker will happily break the package open if it means an easy way in.
So there has to be a way for a limited set of authorized people to get to those resources under tightly controlled circumstances without opening it up to everyone. TI spoke of locking the debug port down after testing was complete; ARM talks about authenticating access to debug.
Then there’s the manufacturing thing. And, based on these discussions and others even more so, I can’t help noticing that manufacturing houses seem to get a pretty bad rap. I’m not going to opine on my own behalf, having had no first-hand bad experiences, but, both here and at ICCAD (which we’ll talk about in another piece), there was talk about certain aspects of manufacturing being done only in a trusted site.
The reason this is key is because of… well… keys, at the very least. The security “provisioning” step happens during manufacturing. There are different ways of doing this, but, most typically, the device being manufactured will have a key loaded using a trusted platform module, or TPM. The key goes both into the device and, typically, into a database. When the device is on-boarded, its key is validated against the database.
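To make the provisioning-then-onboarding flow concrete, here’s a toy sketch. Everything in it is hypothetical – real provisioning happens inside a TPM or HSM at the trusted site, not in Python – but the shape is the same: a per-device key goes into both the device and the manufacturer’s database, and on-boarding uses a challenge-response so the key itself never travels over the wire.

```python
import hashlib
import hmac
import secrets

# Manufacturer's database: device_id -> per-device key (names hypothetical)
device_db = {}

def provision(device_id: str) -> bytes:
    """At the trusted manufacturing site: generate a per-device key,
    record it in the database, and burn it into the device."""
    key = secrets.token_bytes(32)
    device_db[device_id] = key
    return key  # this copy goes into the device's secure storage

def onboard(device_id: str, device_key: bytes) -> bool:
    """In the field: the device proves it holds the provisioned key
    by answering a random challenge; the key never leaves the device."""
    challenge = secrets.token_bytes(16)
    response = hmac.new(device_key, challenge, hashlib.sha256).digest()
    expected = hmac.new(device_db[device_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)
```

A cloned or counterfeit device that lacks the provisioned key can’t produce the right response – which is exactly why letting a sketchy factory see those keys (or the database) undoes the whole scheme.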
So access to those keys and that database can be problematic, to say the least, if done by some factory of questionable repute. (Overbuilding and other typical sketchy manufacturing practices apply here as well, of course.)
What surprised me was the amount of discussion about doing part of your manufacturing in one place, while the delicate parts (from a security standpoint) are then done in a trusted site. My only guess here is that trustworthy sites charge a lot more, so you use the cheap and grungy house to save bucks and the house with integrity only for what’s necessary. Maybe there’s a different reason, but it certainly seems like it would add work and logistics to send your parts around to more than one house.
This is, of course, why you need roots of trust for your manufacturing houses as well – as many as the companies you use. Yeah, we got more RoTs than a root cellar!
The TI Version
TI had its own list of security layers that it uses in its SimpleLink WiFi module. It included:
- Separate execution environment (a la isolation from ARM);
- Critical hardware accelerators; in their case, specifically for AES, SHA, RC4, PKA, and TRNG;
- Encrypted storage;
- Boot loader and bundle protection (more on this shortly); and
- Device identity.
Much overlap with what ARM has.
The software update validation piece merits some further elaboration. The “obvious” steps (“obvious” not meaning that they’re followed religiously) have always been to make sure an update is legit before installing. That might mean receiving a digest separately for confirming the integrity of the update. But TI noted one more aspect: testing that the update works before declaring the job done.
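The out-of-band digest check is simple enough to sketch. The assumption here (the function names are mine) is that the digest arrives over a separate, trusted channel from the update payload itself, so an attacker who tampers with the download can’t also swap in a matching digest.

```python
import hashlib
import hmac

def verify_update(payload: bytes, out_of_band_digest: str) -> bool:
    """Recompute the digest of the downloaded image and compare it to the
    digest delivered separately over a trusted channel."""
    actual = hashlib.sha256(payload).hexdigest()
    # constant-time comparison avoids leaking where the mismatch is
    return hmac.compare_digest(actual, out_of_band_digest)
```

If even one byte of the download is corrupted or tampered with, the recomputed digest won’t match, and the update is rejected before installation.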
Why? Well, as an example (yes, this is the one gloat), a smart door-lock company sent out an update for one of their models. Problem is, they also sent it to users of a different model – and it made those devices decidedly not smart. So much so that the devices couldn’t even connect to the cloud, meaning that the company couldn’t solve the problem remotely.
So TI adds a step to their update routine: test the dang thing out, and, if it doesn’t work, roll back to the previous version (suggested also by ARM). This, of course, means more storage, since you need to be able to house both the current and new versions at the same time; you can’t just overwrite the old stuff. But, in return for that extra storage cost, you’ll likely be spared some support – and PR-gloating – nightmares.
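The test-then-commit idea maps naturally onto an A/B (dual-slot) update scheme, which also shows where that extra storage cost comes from. This is a toy sketch with hypothetical names; on a real device, the “health check” means actually booting the new image and confirming it can, say, reach the cloud.

```python
class ABUpdater:
    """Toy A/B updater: two firmware slots, so the old version
    survives until the new one proves itself."""

    def __init__(self, current_firmware):
        self.slots = {"A": current_firmware, "B": None}  # the extra storage cost
        self.active = "A"

    def apply_update(self, new_firmware, health_check) -> bool:
        spare = "B" if self.active == "A" else "A"
        self.slots[spare] = new_firmware      # write alongside; never overwrite
        if health_check(new_firmware):        # e.g., boots and connects to the cloud
            self.active = spare               # commit: switch the active slot
            return True
        self.slots[spare] = None              # roll back: old version untouched
        return False
```

Had the door-lock vendor used something like this, the bricked units would simply have reverted to the previous firmware instead of falling off the network entirely.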
More info:
IoT Security Summit
What do you think of these notions of trust in the IoT?