They say that security can’t be designed as an afterthought. That security must be thought through as an early aspect of planning and architecture design. (“They” being… well, the same “they” that say lots of things.)
It would be so much easier if a small design house could have one security expert (at most) that everyone could run to with last-minute implementation requests. “Hey, we need PKI authentication. Can you quick add that in?” Or, “Hey, it turns out that, if we bury the same private key in every device, then, if it gets hacked, all devices are exposed. Who knew?! So… can you bury it deeper please?” “Hey, we need to say we have security on the data sheet. Can you do something so we can say that and have it be plausible? Kthxbai!”
But if we need to think this stuff through from the get-go, then… well, the implementation folks aren’t really involved at that stage. So you need at least one person to deal with the high-level aspects of security – in other words, the security policy. When that’s in place, we need a code jockey to implement that policy. (Yes, you could have one person do both… but bear with me.)
To be clear, there are two aspects (at least) to security:
- There’s the infrastructure: the capacity for the 3 As (Authentication, Authorization, and Attestation), encryption, secure boot, secure updates, etc.
- Then there’s making sure that there aren’t any hidden gotchas in the rest of the software code that might provide a way for someone to hack past the security infrastructure.
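To make the attestation piece of that infrastructure concrete, here’s a minimal sketch of a boot-time measurement check: hash the firmware image and compare it, in constant time, against a trusted reference digest. This is illustrative only – a real secure-boot flow verifies a signature over the digest with a key anchored in hardware, and the firmware bytes here are hypothetical stand-ins.

```python
import hashlib
import hmac

def attest_image(image_bytes: bytes, expected_digest: bytes) -> bool:
    """Boot-time attestation sketch: hash the firmware image and compare
    against a trusted reference digest. (A real secure-boot chain verifies
    a signature over the digest with a hardware-anchored key; this shows
    only the measurement-and-compare step.)"""
    actual = hashlib.sha256(image_bytes).digest()
    # hmac.compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(actual, expected_digest)

# Hypothetical firmware blob and its known-good ("golden") digest
firmware = b"\x00\x01\x02\x03"  # stand-in for a real image
golden = hashlib.sha256(firmware).digest()

print(attest_image(firmware, golden))            # True: image matches
print(attest_image(firmware + b"\xff", golden))  # False: image tampered
```

The same pattern extends to measuring multiple components (bootloader, application) at boot or on demand, which is the kind of choice a policy would pin down.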
It’s that first one that we’re going to focus on here. The other is also important, but we’ll leave that for another time. Basic security infrastructure is a major addition to small, simple designs that have never had to consider security before. And there’s not necessarily any one-size-fits-all solution. Yeah, the high-level things you need to have may cover lots of systems, but it’s the details of what they’re intended to protect and how they protect them that vary from system to system.
This discussion comes courtesy of an announcement by IAR and Secure Thingz (now wholly owned by IAR) that’s intended to simplify the implementation of robust infrastructure. And the products that they’ve announced – created by Secure Thingz and made available in IAR’s environment – are split according to the two phases of security design that we mentioned above.
Creating Embedded Trust
We start with that most critical aspect of security: creating the policy. At this stage, all implementation details are abstracted away. In fact, you could be creating a single policy that would apply to a whole series of products. Those products might have different microcontrollers (MCUs); some might implement security in software, others in hardware. Doesn’t matter: the policy can apply to them all equally – because the policy doesn’t hinge on lower-level implementation options.
Here you’re answering questions about how you want to handle certificates and keys, how you want to do secure updates, and what aspects of the software suite should be attested at boot-up (or at other times). We’re not talking about elements that will be implemented in app software or will be laid over an RTOS or other OS; we’re talking about functions that will be embedded in the boot code and come up as a fundamental part of the low-level system architecture.
The security policy should be digestible by anyone with solid security expertise – whether or not they’re equipped to code that infrastructure.
Secure Thingz provides a way to capture all of this in a machine-readable security profile. They call it Embedded Trust. The profile contains enough specificity that implementation can proceed directly from the policy, modulated by the details of the underlying platform – the MCU and other hardware.
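The actual Embedded Trust profile format isn’t public here, so purely as an illustration of the idea, a machine-readable policy might capture those same high-level choices as structured data that a tool could sanity-check before implementation begins. Every field name below is hypothetical.

```python
# Hypothetical, illustrative policy structure -- NOT the actual
# Embedded Trust profile format, whose details aren't described here.
policy = {
    "identity": {
        "key_type": "ECDSA-P256",
        "per_device_unique": True,   # one private key per device, never shared
        "cert_chain_depth": 2,
    },
    "update": {
        "transport": "signed-image",
        "rollback_protection": True,  # reject downgrades to old firmware
    },
    "attestation": {
        "measure_at_boot": ["bootloader", "application"],
        "hash_alg": "SHA-256",
    },
}

def validate_policy(p: dict) -> list:
    """Flag obviously unsafe choices before anyone writes boot code."""
    problems = []
    if not p["identity"]["per_device_unique"]:
        problems.append("shared private key: one leak exposes every device")
    if not p["update"]["rollback_protection"]:
        problems.append("no rollback protection: vulnerable firmware can be reinstalled")
    return problems

print(validate_policy(policy))  # [] -- nothing flagged
```

The point of machine-readability is exactly this: the same data that documents the policy can drive checks and, downstream, code generation – no human re-interpretation in between.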
Can You See Trust?
The second part then takes place for every system as part of the implementation, and it’s handled by another product called C-Trust. This one is used not so much by security planners and architects as by coders.
Ordinarily, coders could read the security profile and then… well, write code that implements the policy. Hopefully correctly. Not to cast aspersions on the huge population of coders out there, but manual implementation of a policy is subject to interpretation. Give two coders the same profile, and you run a decent chance of ending up with two different implementations.
Part of that could be due to ambiguities in the strategy as written. By intent, anyway, a machine-readable Embedded Trust profile should be clear as to what’s needed. But are all implementations the same? Is the code airtight for each one? Was anything inadvertently omitted from the implementation?
Secure Thingz have decided that, within an environment like the one provided by IAR, an easier, less error-prone approach is to have a tick box for security. If checked, you then get the opportunity to choose a profile to import into the project. A profile created by Embedded Trust can be interpreted by C-Trust. That profile, along with specific knowledge of the MCU and other security hardware (like encryption acceleration) for a specific system, lets C-Trust auto-generate boot code that implements the details of the security policy.
That all sounds straightforward – maybe, depending on some details. Code generators often create source code that is then compiled into the rest of the software. If that’s the case, then what if there’s a compile error involving the security code that the coder didn’t write by hand? On the other hand, if object code is used, then might some issues create a need to debug the security stuff – again, that the coder didn’t write or compile?
I checked on this with CEO Haydn Povey, and he clarified that the IDE imports binary code in a file called the SREC. The Security Manager then takes that SREC, modifies it according to the system and policy, and puts out a new version called the mastered SREC. This is the actual binary code that will be executed in the system.
As far as the IDE is concerned, the security code is treated as a blob of data, so it’s never involved in the compile process, and it should cause no issues at that point. He also said that the code should never cause any operational issues that would require working through the auto-generated code. “Correct by construction,” I guess you’d call it. Specifically, there should be no instances where the SREC code needs to be accessed by a debugger.
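For anyone unfamiliar with the format: an SREC (Motorola S-record) file is a plain-text container for binary data, where each line carries a record type, byte count, address, data, and a one’s-complement checksum. That structure is part of why a tool can shuttle it around as an opaque blob while still verifying its integrity. Here’s a small sketch of a per-record checksum check, using the well-known S0 header record whose data bytes spell “hello”:

```python
def srec_checksum_ok(record: str) -> bool:
    """Verify one Motorola S-record line.
    Layout: 'S' + type digit + hex bytes (count, address, data, checksum).
    The count covers address + data + checksum; the checksum is the
    one's complement of the low byte of the sum of all counted bytes."""
    if not record.startswith("S") or len(record) < 10:
        return False
    body = bytes.fromhex(record[2:])   # count .. checksum, as raw bytes
    count, payload, checksum = body[0], body[1:-1], body[-1]
    if count != len(payload) + 1:      # count includes the checksum byte
        return False
    return checksum == (~(count + sum(payload))) & 0xFF

# Standard example header record: data bytes spell "hello"
print(srec_checksum_ok("S00F000068656C6C6F202020202000003C"))  # True
print(srec_checksum_ok("S00F000068656C6C6F202020202000003D"))  # False
```

Mastering the SREC, as described above, would amount to rewriting some records and re-emitting valid checksums – all without the IDE’s compiler or debugger ever touching the contents.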
Embedded Trust and C-Trust don’t necessarily cover everything you need to ensure a secure device – like making sure code doesn’t open back doors – but they could be a decent step toward making security easier to implement in a consistent, robust fashion.
Haydn Povey, Founder and CEO, Secure Thingz