The FCC is worried. You and they spend all this time and energy getting your radio certified, and then some bozo hacks in, changes how the radio works, and puts you out of spec.
And so, back in early 2015, the FCC issued guidance – largely in the form of questions – regarding WiFi devices, particularly home routers, in an effort to ensure that your radio isn’t hackable.
The result has been that some router makers have simply locked down the platform so that after-market modifications are no longer possible, and this has caused an outcry from after-market modifiers. The reason it’s an issue is that these open-source developers use the platform to add apps or other software that, presumably, has nothing to do with the radio.
In an attempt to find the magic middle way, the prpl organization, headed by Imagination Technologies (IMG) and featuring the MIPS architecture, recently put out a proof of concept that they say gives both assurance to the FCC and freedom to open-source developers.
Questions from the FCC
The FCC didn’t ban modifications, but their guidelines are intended to provide assurance that those apps won’t mess with the radio. They issued a document (linked below) that laid out a series of questions, the answers to which would provide that assurance – or not. Randomly-selected examples are:
- “Describe how any software/firmware update will be obtained, downloaded, and installed. Software that is accessed through manufacturer’s website or device’s management system, must describe the different levels of security.”
- “Describe all the radio frequency parameters that are modified by any software/firmware without any hardware changes. Are these parameters in some way limited, such that, it will not exceed the authorized parameters?”
- “What prevents third parties from loading non-US versions of the software/firmware on the device? Describe in detail how the device is protected from “flashing” and the installation of third-party firmware such as DD-WRT.”
- “For a device that can be configured as a master and client (with active or passive scanning), if this is user configurable, describe what controls exist, within the UI, to ensure compliance for each mode. If the device acts as a master in some bands and client in others, how is this configured to ensure compliance?”
Source: “SOFTWARE SECURITY REQUIREMENTS FOR U-NII DEVICES,” Federal Communications Commission Office of Engineering and Technology Laboratory Division, March 18, 2015
As an OEM, it’s probably easier simply to answer everything with, “No one gets in except us” and lock everything down. Not the answer the open-source community wants.
What the prpl organization did was to demonstrate how the radio could be protected from hackers while still providing a sandbox for open-source developers to play in.
Hypervising
Their solution involves virtualization, which is not a new notion. While there are different approaches to virtualization, they all operate on a similar concept: create containers for different apps – potentially with different operating systems (referred to as “guest OSes”) – and isolate them from direct interaction with key parts of the hardware.
A hypervisor acts as the referee, deciding what’s allowed and blocking what isn’t. While having the hypervisor intercede on every instruction could slow things down immensely, the prpl approach involves a hypervisor that watches what’s happening, allowing some direct access to hardware and stepping in when something shady appears to be afoot or when competing accesses to shared resources need adjudication.
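By way of illustration only – not prpl’s actual code – a trap handler along these lines might decide whether a guest’s poke at memory-mapped hardware gets through. The addresses, names, and policy here are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical memory-mapped I/O range for the radio; real addresses are SoC-specific. */
#define RADIO_REG_BASE  0x1F000000u
#define RADIO_REG_END   0x1F000FFFu

/* Called by the hypervisor when a guest access traps.
 * Returns true if the access is allowed to proceed. */
static bool hyp_handle_mmio_trap(uint32_t guest_id, uint32_t phys_addr, bool is_write)
{
    /* Radio registers belong to the trusted (root) domain only. */
    if (phys_addr >= RADIO_REG_BASE && phys_addr <= RADIO_REG_END) {
        printf("guest %u blocked from radio register 0x%08x\n", guest_id, phys_addr);
        return false;               /* deny: a real hypervisor would fault the guest */
    }
    /* Anything else is passed through (a real hypervisor would check much more). */
    (void)is_write;
    return true;
}

int main(void)
{
    /* A well-behaved access and a shady one. */
    printf("%d\n", hyp_handle_mmio_trap(1, 0x1E000100u, true));   /* allowed */
    printf("%d\n", hyp_handle_mmio_trap(1, 0x1F000010u, true));   /* blocked */
    return 0;
}
```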
The prpl/MIPS solution doubles the number of “modes” in which software can operate. Linux has the well-understood “kernel” mode, with higher privileges, and “user” mode, where the hoi polloi play. Most applications run in user mode, while kernel modules service the user-mode code. Prpl has created two versions of this model: Root and Guest. Root kernel mode is the most privileged, followed by root user, then guest kernel and guest user. Guest kernel mode just thinks it’s touching the hardware; Root kernel mode is truly touching the hardware.
Image courtesy Imagination Technologies
Guest-level software can be added and updated by “anyone.” Root-level code, however, can be updated only by the original manufacturer.
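Purely as a sketch (the mode names are prpl/MIPS terminology; the enforcement logic is invented here for illustration), the four privilege levels and that update policy might be modeled like this:

```c
#include <stdbool.h>
#include <stdio.h>

/* The four privilege levels described above, most to least privileged. */
typedef enum {
    MODE_ROOT_KERNEL  = 0,   /* hypervisor level: truly touches the hardware */
    MODE_ROOT_USER    = 1,
    MODE_GUEST_KERNEL = 2,   /* only thinks it's touching the hardware       */
    MODE_GUEST_USER   = 3
} priv_mode_t;

/* Hypothetical policy: only the original manufacturer may update root-level
 * code; guest-level code may be updated by "anyone." */
static bool update_allowed(priv_mode_t target, bool signed_by_manufacturer)
{
    if (target == MODE_ROOT_KERNEL || target == MODE_ROOT_USER)
        return signed_by_manufacturer;
    return true;
}

int main(void)
{
    printf("third party updating a guest app:   %s\n",
           update_allowed(MODE_GUEST_USER, false) ? "allowed" : "denied");
    printf("third party updating root firmware: %s\n",
           update_allowed(MODE_ROOT_KERNEL, false) ? "allowed" : "denied");
    return 0;
}
```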
There’s a two-level memory map that goes with this. The first level operates at the guest OS level, working just as it would if the OS were running directly on the hardware. That makes it easy to use the same software in or out of a virtualized environment with minimal, if any, change. The second level maps the “resolved” guest memory addresses to actual physical memory; this mapping is handled at the hypervisor level.
What you end up with is application isolation, with some programs having more privilege than others. The obvious impact of this is that the radio software can run in a trusted domain, inaccessible to other processes. Open-source programs can still run in their own containers, but they no longer have access to the radio code.
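A toy two-stage lookup – page sizes, table contents, and names all invented for illustration – shows how the guest’s own mapping stays untouched while the hypervisor quietly adds a second stage underneath:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12   /* 4-KB pages, purely for illustration */

/* Stage 1: the guest OS's page table, guest-virtual -> guest-"physical".
 * Stage 2: the hypervisor's table, guest-"physical" -> real machine address. */
static uint32_t stage1[16] = { [0] = 0x3, [1] = 0x7 };    /* toy guest mappings  */
static uint32_t stage2[16] = { [3] = 0x20, [7] = 0x21 };  /* hypervisor mappings */

static uint32_t translate(uint32_t guest_virt)
{
    uint32_t gva_page  = guest_virt >> PAGE_SHIFT;
    uint32_t gpa_page  = stage1[gva_page & 0xF];   /* what the guest OS set up   */
    uint32_t mach_page = stage2[gpa_page & 0xF];   /* what the hypervisor set up */
    return (mach_page << PAGE_SHIFT) | (guest_virt & ((1u << PAGE_SHIFT) - 1));
}

int main(void)
{
    /* The guest OS built stage 1 exactly as it would on bare metal; the
     * hypervisor's stage 2 relocates it without the guest ever knowing. */
    printf("guest virtual 0x00001234 -> machine 0x%08x\n", translate(0x00001234));
    return 0;
}
```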
One interesting operating assumption prpl uses is that the guest OS will be hacked. Not might be hacked. So it’s up to the hypervisor to swat away any illegitimate attempts by one domain to access another.
Which, of course, puts pressure on the hypervisor to be iron-clad. Prpl says that such a hypervisor involves a modest amount of code – around 200 Kbytes or so – with an extremely limited interface. This shrinks the “attack surface” – the common phrase for how many ways there are to break into something.
Having smaller code means two things. First, there’s less code to hack; second, it’s easier to scrutinize the entire program before it’s deployed, looking for weaknesses. In other words, it’s easier to vet a small program than a large one.
Similarly, by reducing the number of interface commands (i.e., having a small API), each entry point can be more thoroughly vetted and firewalled, and, put simply, there are fewer ways into the hypervisor. So a key operating assumption here is that the hypervisor can be trusted.
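To see why a tiny API shrinks that attack surface, consider a hypothetical hypercall dispatcher with only a handful of entry points, each vetting its arguments before doing anything; anything outside that short table simply bounces off. (This is a sketch, not the prpl hypervisor’s actual interface.)

```c
#include <stdint.h>
#include <stdio.h>

/* A deliberately tiny hypercall interface: every way into the hypervisor is
 * one of these few, individually vetted entry points. (All hypothetical.) */
typedef int (*hypercall_fn)(uint32_t arg);

static int hc_yield(uint32_t arg)    { (void)arg; return 0; }
static int hc_send_msg(uint32_t arg) { return arg < 4096 ? 0 : -1; }  /* bounds-checked */
static int hc_get_time(uint32_t arg) { (void)arg; return 0; }

static const hypercall_fn hypercall_table[] = { hc_yield, hc_send_msg, hc_get_time };
#define NUM_HYPERCALLS (sizeof(hypercall_table) / sizeof(hypercall_table[0]))

static int dispatch_hypercall(uint32_t number, uint32_t arg)
{
    if (number >= NUM_HYPERCALLS)
        return -1;                  /* unknown entry point: rejected outright */
    return hypercall_table[number](arg);
}

int main(void)
{
    printf("valid call: %d\n", dispatch_hypercall(1, 128));
    printf("bogus call: %d\n", dispatch_hypercall(42, 0));
    return 0;
}
```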
IMG has added some hardware support for virtualization in their processors, including control registers that the guest domains can access as if they were accessing the main processor registers. This improves performance – although, again, the hypervisor has to keep an eye out for access to registers controlling shared resources so that it can referee any conflicts or collisions.
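A rough sketch of that dispatch decision – the register names and the notion of a “shared” list are invented here, not MIPS specifics – looks something like this:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical control registers. */
enum ctrl_reg { REG_TIMER_CFG, REG_CACHE_CFG, REG_DDR_ARBITER, NUM_REGS };

/* Registers that control shared resources must still trap to the hypervisor;
 * the rest hit the per-guest hardware copies directly, which is the speedup. */
static const bool is_shared[NUM_REGS] = {
    [REG_TIMER_CFG]   = false,
    [REG_CACHE_CFG]   = false,
    [REG_DDR_ARBITER] = true,
};

static void guest_reg_write(enum ctrl_reg reg, unsigned value)
{
    if (is_shared[reg])
        printf("reg %d: trap to hypervisor to referee (value %u)\n", reg, value);
    else
        printf("reg %d: written directly to guest copy (value %u)\n", reg, value);
}

int main(void)
{
    guest_reg_write(REG_TIMER_CFG, 100);    /* fast path  */
    guest_reg_write(REG_DDR_ARBITER, 7);    /* refereed   */
    return 0;
}
```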
The hypervisor can also be programmed with a variety of rules that let it intervene if “suspicious” activity occurs. One of the examples that prpl raises is a DDR DoS (denial-of-service) attack – where a program bombards the DRAM with requests so that everything else stalls. At some point, the hypervisor would decide that it’s had enough of this nonsense and shut that access down.
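As a purely illustrative example of such a rule (the counters and threshold are invented here, not prpl’s), the hypervisor might track per-domain DRAM request rates and cut off a domain that floods the memory controller:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_DOMAINS          4
#define DDR_REQS_PER_WINDOW  1000u   /* hypothetical per-window request budget */

static uint32_t ddr_requests[MAX_DOMAINS];
static bool     throttled[MAX_DOMAINS];

/* Called by the hypervisor for each monitored DRAM request from a domain. */
static bool ddr_request_allowed(unsigned domain)
{
    if (throttled[domain])
        return false;
    if (++ddr_requests[domain] > DDR_REQS_PER_WINDOW) {
        throttled[domain] = true;           /* enough of this nonsense */
        printf("domain %u throttled for flooding DRAM\n", domain);
        return false;
    }
    return true;
}

int main(void)
{
    unsigned i, allowed = 0;
    for (i = 0; i < 1500; i++)              /* a misbehaving guest hammers DRAM */
        if (ddr_request_allowed(2))
            allowed++;
    printf("%u of 1500 requests got through\n", allowed);
    return 0;
}
```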
Their OmniShield setup is shown below, with multiple possible guest and trusted apps operating over the hypervisor, and with other key code – like secure boot – working directly over the hardware. Note the presence of what they call a TPM-lite: this is an example of the kind of device we discussed in our piece on key management and protection. It’s something bigger than a secure element used on payment card chips and smaller than the trusted platform module (TPM) used in computers. Such devices are being targeted for the IoT in order to make edge devices secure at a reasonable cost.
Image courtesy Imagination Technologies
TrustZones?
I asked for a comparison to ARM’s TrustZone setup, where one core is walled off for secure, trusted operation, with limited access by the rest of the system. It bears noting, of course, that ARM and the erstwhile MIPS company (which IMG acquired) are like Apple and Wintel – intense competitors. Meanwhile, ARM has had a field day in the embedded world, and they’ve touted TrustZone as a way to protect critical code from tampering.
Prpl agrees that TrustZone embodies some similar concepts, but the key difference is that, with TrustZone, the trusted area is dedicated to a particular CPU core. That means you have untrusted code running on other cores and all trusted software working together in the same zone.
This means that all of the trusted programs need to trust each other, since they’re not isolated from each other. The prpl approach relies exclusively on virtualization to provide many domains (up to 256 in one specific case). Different trusted programs can be isolated from each other, resulting in less night-time tossing and turning while you wonder whether programs you trust might turn rogue in the company of other trusted programs.
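One simplistic way to picture the difference (an illustration, not the OmniShield implementation): give every protected asset an owning domain, and let nothing outside that domain touch it – rather than pooling everything trusted in one shared zone.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_DOMAINS 256   /* prpl cites up to 256 domains in one specific case */

/* Each protected asset is owned by exactly one domain. (Illustrative only.) */
typedef struct {
    const char *name;
    unsigned    owner_domain;
} asset_t;

static bool domain_may_access(unsigned domain, const asset_t *asset)
{
    /* Isolation rule: trusted programs in different domains cannot reach
     * each other's assets, unlike a single shared trusted zone. */
    return domain == asset->owner_domain;
}

int main(void)
{
    asset_t radio_cfg   = { "radio config", 1 };   /* trusted domain 1 */
    asset_t payment_key = { "payment key",  2 };   /* trusted domain 2 */

    printf("domain 1 -> radio config: %s\n",
           domain_may_access(1, &radio_cfg)   ? "allowed" : "denied");
    printf("domain 1 -> payment key:  %s\n",
           domain_may_access(1, &payment_key) ? "allowed" : "denied");
    return 0;
}
```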
More info:
Do you think that prpl’s virtualization proof-of-concept solves the FCC’s concerns?