“Heeeyyy, them’s some real nice FPGAs you got there.”
“Why, thank you.”
“Yeah, my brother and me – we was just discussin’ how you got yourself all them nice FPGAs, and how it would be a shame if somethin’ bad was to happen to ’em.”
“Bad? What do you mean?”
“Well, y’know – like if somebody was to bust your security and get in there and, uh, y’know, steal all yer design stuff.”
“Well, we’ve got a lot of security features built in to protect us.”
“Yeah, see, that’s what we wanted to discuss with you. My brother, he’s pretty good at breakin’ security stuff, and so we was playin’ with them FPGAs of yours and… Boom! We busted right through that security. We, uh, published some papers on it so everybody could see how we done it.”
“Wow, that’s not very helpful.”
“So, ya see, for a nominal fee, my brother and me, we figure we could make it so nobody breaks into your FPGAs again. We know some ways to change the locks – if you get my meaning. We’d like to help ya out.”
“Um, thank you, that’s a nice offer, but I think I’ll just change the locks myself.”
“Ah, see, well – there’s the rub. My brother and me – we’re the only ones allowed to change the locks like that. We got us some – what ya call Patent Protection. I think you might be well advised to avail yourself of our services. Y’know – just to be sure nothin’ unfortunate happens to them nice chips of yours.”
There has been a small controversy storm in the world of FPGA security lately. If you don’t feel like reading the rest of the story, it probably means that you’re among the 99.9% (our estimate) of FPGA users who should go calmly about their business-as-usual and not worry about the recent publicity surrounding the security of FPGA configuration bitstreams against Differential Power Analysis (DPA) side-channel attacks.
However, the tale of FPGA security is one of white hats and black hats, daring and intrigue, measures and countermeasures, security and paranoia, ethics and profits, IP protection and collaboration. We begin our story in the late 1990s, when a small company called Cryptography Research (CRI) came up with a new category of side-channel attacks based on power analysis. (Side-channel attacks are so named because they operate outside the main datapath and don’t require physical access to it – instead relying on passive, non-invasive techniques such as, in this case, monitoring only the power consumed by the device.) These two attacks, Simple Power Analysis (SPA) and Differential Power Analysis (DPA), caused huge waves in the security community when they were announced. Both involve monitoring the power drawn by the device and, via some very clever signal processing, retrieving encryption keys from the observed waveforms. Cryptography Research proved, among other things, that smart cards like those used by financial institutions were vulnerable to attack by small, highly intelligent teams with almost no resources (a common PC and a storage scope) and almost no access to the inner workings of the targeted systems.
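The core idea behind a DPA-style attack can be sketched in a few lines of Python. This is a toy simulation, purely illustrative and not CRI’s actual technique: the 4-bit S-box, the Hamming-weight leakage model, the noise level, and the trace count are all assumptions chosen to keep the example small. Each simulated “power trace” leaks the Hamming weight of an S-box output; correlating those traces against predictions for every key guess makes the correct key stand out.

```python
# Toy correlation-based power analysis (a common formalization of the
# DPA idea). Assumptions: power consumption ~ Hamming weight of the
# S-box output plus Gaussian noise. The S-box shown is the 4-bit
# PRESENT cipher S-box, used here only because it is small.
import random

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def hamming_weight(x):
    return bin(x).count("1")

def simulate_trace(plaintext, key, noise=0.5):
    # One "measurement": leakage of the S-box output, plus noise.
    return hamming_weight(SBOX[plaintext ^ key]) + random.gauss(0, noise)

def correlation(xs, ys):
    # Pearson correlation coefficient, computed by hand.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def recover_key(plaintexts, traces):
    # For each key guess, predict the leakage and correlate it with
    # the measured traces; the right guess correlates most strongly.
    best_guess, best_corr = None, -1.0
    for guess in range(16):
        predicted = [hamming_weight(SBOX[p ^ guess]) for p in plaintexts]
        r = abs(correlation(predicted, traces))
        if r > best_corr:
            best_guess, best_corr = guess, r
    return best_guess

random.seed(42)
SECRET_KEY = 0xB
plaintexts = [random.randrange(16) for _ in range(500)]
traces = [simulate_trace(p, SECRET_KEY) for p in plaintexts]
print(hex(recover_key(plaintexts, traces)))  # recovers the secret nibble
```

Note that the attacker never sees the key directly – only noisy aggregate power measurements – yet statistics over a few hundred traces are enough to pick it out. Real attacks work the same way in spirit, just against real ciphers, real measurement noise, and far more sophisticated signal processing.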
This was a huge problem… in 1999.
Cryptography Research then set about developing a comprehensive portfolio of countermeasures that could be used to defend against these attacks, and, as people tend to do with their intellectual property, they patented them and began licensing them to companies that wanted protection from these attacks – thus putting the company in a lucrative and ethically challenging position. More on that later.
FPGAs were not particularly a factor in the security equation back then. At that time, nobody was very worried about encrypting the configuration data loaded into FPGAs, and few people were doing FPGA designs that did crypto operations. However, as years passed and FPGAs became more capable, people started thinking about the security of their data as it passed through FPGAs, and about the security of the FPGA designs themselves.
Cryptography Research saw the potential for licensing their IP to designers who wanted to use FPGAs for cryptographic operations. This business did not require the cooperation of the FPGA companies themselves. If you were doing a design with just about any FPGA, you could license CRI countermeasures and include them in your FPGA design – making your crypto much trickier to crack. For the small number of companies interested in FPGA-based cryptography, this does the trick. For the rest of the world – as we said earlier – nothing to see here, move on along.
The remaining hole in the FPGA scenario was the configuration of the FPGA. If your priority was protecting your design IP itself (rather than the data that your design was processing) you were in a precarious position. Back when SPA/DPA controversy was swirling, however, FPGA bitstreams were not even encrypted. Anybody with a tiny bit of motivation could watch your FPGA configure itself and read the bitstream file directly – no hacking required. If they wanted to make a copy of your design, they could load that bitstream right into their own FPGA and they’d be good to go. Assuming, of course, that the rest of the stuff hooked up to the FPGA matched. This means that FPGA designs were vulnerable to copying, cloning, overbuilding, and other related malfeasance. Of course, all the non-programmable parts on your board were already available off the shelf, so this just put the FPGA part on a semi-level playing field. The FPGA did not prevent copying your product, but it did not facilitate it either.
Reverse-engineering the bitstream (if the bad guys wanted to actually understand or modify your design) is a significantly harder challenge than simply copying it and installing it in another FPGA. Academic papers have proven it possible, but it requires a great deal of time, patience, and determination, as well as an in-depth understanding of the operational details of the specific FPGA vendor’s proprietary design tools. Probably, your design isn’t interesting enough – no matter how proud you are of it.
Design teams that wanted to protect their designs from these threats had no solid solution. They asked the FPGA companies for help. Several years ago, in response to these requests, FPGA companies added security features to allow the stored bitstream to be encrypted, with the FPGA decrypting it as it loads during initial configuration. These cobbled-on security measures were controversial from the start. Xilinx’s initial solution relied on encryption keys stored inside the FPGA in volatile memory, with the contents of that memory preserved by a small battery attached to the board. That volatility cuts both ways: just about any tamper attempt results in the keys being lost – but so does anything that disturbs a small battery mounted precariously on the board near your FPGA, rendering your device useless.
Altera’s first solution involved storing encryption keys in fuses on the FPGA itself. Upside – no reliability vulnerability like Xilinx’s battery solution. Downside – with a little rubbing compound and patience, one might be able to figure out where those fuses are on the device and which ones are blown, and thus retrieve the encryption keys by physical invasion of the part. As one might expect, Xilinx and Altera had a PR and marketing smackdown, arguing whose solution was better. The predictable result was that both companies adopted the other company’s solution as an option.
Way in the back of the room, we could see a hand go up. “Hey, wouldn’t FPGA bitstream encryption be vulnerable to DPA attack?”
“Quiet there in the back please! We are debating the relative merits of Altera versus Xilinx security. We have no interest in your ideas which could apply equally to both.”
The folks in the back of the room wouldn’t be quiet, though. It seemed pretty obvious that a straightforward application of DPA or SPA would be able to retrieve the keys from FPGA bitstream encryption. Most people involved in security agreed that FPGA bitstreams were vulnerable to DPA attacks. FPGA vendors made circumstantial arguments that they were not, adopting an “OK, show me an example” position. For years, nobody demonstrated a successful side-channel attack on encrypted FPGA bitstreams. “Good enough,” said the FPGA vendors – and they went on about their normal business.
The current controversy was set off when graduate students at Ruhr University in Bochum, Germany finally successfully grabbed encryption keys used for FPGA bitstream encryption – and published their results and methods. Since the entire security community had long believed that this attack was not only possible, but also fairly straightforward, one is led to ask why nobody had published results before. The answer is anybody’s guess. Many of the papers on new attacks are published as part of graduate work at universities. Perhaps professors were reluctant to sponsor a many-month project whose goal was to formalize an attack already widely believed to be easy. The four authors of the current paper said their attack required about six months of reverse-engineering work to set up, and then they were able to retrieve the encryption keys by monitoring a single power-up sequence. Perhaps no other team during the past decade or so has felt like spending six months of engineering time to show an example of something already widely held to be true?
What the new results demonstrated is something that should have been blatantly obvious for the past decade. Somebody with a graduate-level education in encryption technology and a few months of time on their hands can break FPGA bitstream encryption – enabling them to do all the things they could already do with the majority of FPGA designs (which don’t even use bitstream encryption in the first place): namely, copy and install the bitstream on another, identical FPGA and/or begin the long and arduous process of reverse-engineering parts of the original design from the bitstream – a process which, as we already mentioned, is doable with sufficient motivation.
All the recent controversy is really nothing new. Security experts have believed this exact attack was possible for years, and they have publicized that belief. If you truly cared about the security of your design, and you were relying solely on FPGA bitstream encryption to protect you… well, shame on you, but you are no less safe today than you were at any time in the past decade.
Padlocks Don’t Protect Against Bulldozers
When you buy a padlock at the hardware store, you are purchasing a known level of security. Before you even open the package, you know that a sufficiently motivated thief with a pair of bolt-cutters can defeat the lock. A bulldozer will make the lock, the door to which it is attached, and the entire storage shed irrelevant as a safeguard for your valuables. You adopt security measures that are proportional to the threat that you perceive. If you are defending against attackers for whom a padlock is sufficient deterrent, a padlock will suffice.
Similarly, if you’re defending your FPGA design against people for whom a graduate-level education in cryptography, a storage scope and a PC, and six months of focused time on your design is a sufficient deterrent, the existing bitstream encryption schemes are still probably adequate. According to our surveys on engineering effort involved in a typical FPGA design, that is more engineering time and talent than was required to complete the FPGA design in the first place. The attackers would be better served to just do what you did – and create their own design from scratch.
If you’re one of the people for whom security is extra-important – perhaps you work on defense-related systems, or set-top boxes, or machines that deal with financial transactions – you have almost certainly factored this vulnerability into your security plans years ago. There’s no new information for you here.
In preparing for this article, we contacted several FPGA companies, as well as Cryptography Research. The responses – and lack thereof – were telling. The FPGA companies don’t want you to get all worked up about this announcement. That position is understandable. They don’t want needless panic and confusion in their customer base. Paraphrasing the responses from Xilinx and Altera to our request, we got something like “Hey, yeah – we saw that announcement. Uh, security is a big and complicated subject and we don’t like to talk about it much, and we kinda already knew this might happen, and we’re pretty sure our competitor’s security isn’t as good as ours is, and there are lots of other – Hey! Look over there! Is that the Goodyear Blimp?”
Actel/Microsemi had a different response. They had to. For years, they have been promoting their FPGAs not just as having security features, but also as platforms for securing your whole design. Remember the Actel ads with the big vault door? They’ve made security a big priority as well as a marketing point – a smart move, since the lion’s share of their business has historically come from mil-aero customers. Microsemi is the only FPGA vendor (as far as we can determine at this point) who has licensed CRI’s countermeasures. This means that Microsemi customers can add CRI countermeasures to their designs without having to negotiate a separate license deal. The CRI license fees are built into the deal when you buy the FPGAs. However, the company has not yet put any of the DPA countermeasures into the configuration path, so for configuration you are in the same boat as you are with other FPGA vendors.
Since Microsemi’s FPGAs are flash-based, they offer a feature where, once configured, you can blow a fuse and render that configuration permanent. Your FPGA now does not have to go through reconfiguration at startup, so there is no bitstream to capture. As long as you don’t need in-system reconfigurability, you can go back to sleeping well at night.
The bottom line for system designers is – if you want to build secure systems based on FPGAs, you can still do that. You’d just better count on the bitstream being hackable by an intelligent, determined attacker. If you’re counting on bitstream encryption as your primary or only protection against these adversaries, you’d better think again. If you’re a mainstream FPGA customer doing normal, mainstream FPGA design, continue as you were. There’s nothing new to see here.
The ethics of the attack and countermeasure business are slippery at best. There are two ways to view the business model. One is that companies like CRI are doing little more than selling “protection” – in the pejorative sense of the word. They find a vulnerability in your system security and announce it to the world, then they offer you patented, proprietary solutions to protect against the problem that they, arguably, created in the first place. The alternative view (and the one proffered by CRI and similar organizations) is that they’re doing the industry a service by finding the security holes before the real bad guys do, and making viable commercial solutions available to protect people from those attacks. You can decide for yourself.
Meanwhile, carry on with your usual FPGA work. There’s really nothing new here.