
Security Blanket

Protecting Your System in an Age of Paranoia

The year is 2010. Alone in the kitchen, 8-year-old Mikey pulls a cereal container down from the cupboard. He presses the “open” button. A tiny camera with a wide-angle lens grabs an image. Inside the lid, a low-cost embedded system with hardware video processing locates Mikey’s key facial features in the image and creates an identification map. It then downloads from the household wireless network a current database of the family members allowed access to that cereal at this time of day. Mikey is on the “disallowed” list. The lock holds fast. A text notification is already on its way to both parents’ mobile phones. Mikey is busted!

Security is a growing concern in almost every type of system design today. Some applications have a more pressing need than others, of course. The consequences of Mikey subverting the automated cereal protection system and downing a few unauthorized grams of carbohydrates are far less severe than, say, a security failure in an airliner engine control system. Almost all systems these days have at least rudimentary security concerns. In a few cases, security is paramount.

A somewhat undesirable corollary to Moore’s Law might say that the more gates we have available, the more we’ll tend to use. Why connect a simple switch directly to a control line when we can add a microcontroller that allows us to use a button, de-bounce the press action, check the status of the day/night condition, and illuminate the appropriate status LED? We sprinkle superfluous software and hardware into our systems like Emeril adding the final “Bam!” of seasoning to some exotic culinary creation.
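
To see how quickly a “simple switch” grows into a subsystem, here’s a minimal sketch in C for a generic microcontroller. The I/O routines (read_button_raw, read_light_sensor, set_led) are hypothetical placeholders for whatever GPIO layer your part provides, not any specific vendor API.

/* Minimal sketch of the button subsystem described above, assuming a
 * generic microcontroller with hypothetical I/O helpers. */

#include <stdbool.h>
#include <stdint.h>

#define DEBOUNCE_COUNT 5          /* consecutive matching samples required */

enum { LED_DAY = 0, LED_NIGHT = 1 };

extern bool read_button_raw(void);        /* hypothetical: raw GPIO read   */
extern bool read_light_sensor(void);      /* hypothetical: day/night input */
extern void set_led(int which, bool on);  /* hypothetical: LED driver      */

/* Call periodically (e.g., every 10 ms) from a timer tick. */
void poll_button(void)
{
    static uint8_t stable = 0;
    static bool last_raw = false, pressed = false;

    bool raw = read_button_raw();
    if (raw == last_raw) {
        /* Accept the new state only after it has been stable long enough. */
        if (stable < DEBOUNCE_COUNT && ++stable == DEBOUNCE_COUNT)
            pressed = raw;
    } else {
        stable = 0;                       /* input bounced; restart the count */
    }
    last_raw = raw;

    if (pressed) {                        /* light the LED for the current mode */
        bool daytime = read_light_sensor();
        set_led(daytime ? LED_DAY : LED_NIGHT, true);
        set_led(daytime ? LED_NIGHT : LED_DAY, false);
    }
}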

The consequence of this complexity explosion is a trend toward systems with a plethora of security vulnerabilities. Usually, we don’t care. But in the cases where we do, the difficulty of maintaining rigorous security grows almost exponentially as the complexity of our basic system rises. Throw Moore’s Law into the mix, and you end up with double security holes squared. Not a pretty picture for the paranoid.

If you dare, go deadbolt the door, double-check your belt and suspenders, strap on your helmet, goggles, bullet-proof vest, latex gloves and kneepads, and let’s go explore (cautiously, of course) some of the issues in embedded system security today. First, as an engineer, it’s important that you understand statistics. Mastering the mathematics of probability will let you make one of the most important determinations in system security design – whether you’re protecting against an actual threat or simply a perceived one.
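
As a back-of-the-envelope illustration of that math (with completely made-up numbers), the decision usually reduces to comparing the expected annual loss from an attack against the annual cost of defending against it:

/* Illustrative only: the probability and dollar figures are invented. */

#include <stdio.h>

int main(void)
{
    double p_attack_per_year  = 0.02;       /* estimated chance of attack per year   */
    double loss_if_breached   = 250000.0;   /* estimated cost of a breach ($)        */
    double cost_of_protection = 40000.0;    /* annual cost of the countermeasure ($) */

    double expected_loss = p_attack_per_year * loss_if_breached;   /* $5,000 here */

    printf("Expected annual loss: $%.0f\n", expected_loss);
    printf("Countermeasure cost:  $%.0f\n", cost_of_protection);
    printf("%s\n", expected_loss > cost_of_protection
           ? "Actual threat: the protection pays for itself."
           : "Perceived threat: the protection costs more than the risk.");
    return 0;
}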

You might think engineers would be pragmatists – practical-minded folk who would never pander to paranoid delusions. In my experience, however, quite the opposite is true. Trained problem solvers, engineers tend to work to eradicate every possible failure mechanism, often without weighing the probability of a particular failure against the cost of preventing it.

As a case in point, I once worked on a large software development project in the very early days of object-oriented programming with C++. Our team had written hundreds of thousands of lines of the stuff (poorly, I might add, as all of us were complete novices in object-oriented design, and decent compilers and debug tools didn’t yet exist). Just when we were at the peak of our development frustration, the big lockdown notice came down from on high. The company was afraid of a security breach by our competitor, and our source code had to be protected at all costs. Work almost ceased while elaborate procedures were developed to thwart these imagined thieves.

Personally, I thought that the best strategy we could have employed was to just give our source code to our competitors. Simply put it in a box and mail it to them. After months of effort, we could barely get the stuff to compile, let alone perform any facsimile of its intended function, and we’d written it ourselves. Even if our competitor was smart enough to get it to build successfully, the debug effort alone would surely have set them back years. The point, however, is that we had reacted irrationally to a perceived threat without doing a sound analysis of the cost of the security compared with the cost of a security failure. Our project was set back months, launched late, and missed an important market window as a direct result of our paranoid over-reaction.

There are a number of types of security to consider in systems design. Closest to home for us as engineers is, of course, the security of our intellectual property. The last thing we want to picture is some shyster stealing our hard-earned design ideas and competing with us in the market using our own technology. (…unless we’re doing open-source software development, of course, in which case we’re helping the technological proletariat revolution rise up to defeat the IP-mongering demons of corporate greed. Power to the people!) Beyond our own IP security, if we’re developing a subsystem or chipset that’s used by downstream designers, we need to be concerned about design security for our OEMs as well.

With outsourcing and globalized manufacturing becoming more the rule than the exception these days, there is a very real risk of our designs being stolen by the very people we trust to help us realize them. Overbuilding is probably the most common theft mechanism hitting systems designers today. It works like this: Manufacturers work hard all day building the units that you’ve ordered and shipping them to you in a timely manner. They then work hard all night building more of your product to sell themselves on the black market, using standard parts they acquired through normal channels. These identical (they were made on the same assembly line) products have a much higher profit margin than the ones you’re selling, of course.

The best defense against overbuilding is to have some component in your system for which you can control or monitor the inventory, or that you can license or activate only in the hands of an authorized user. If your system contains an ASIC, that’s a good place to start. Unless the overbuilders have a way to clone your ASIC (we’ll talk about cloning in a minute), they won’t be able to build working systems without tapping into your exclusive supply chain.
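
One way to put that “activate only in the hands of an authorized user” idea into practice is a per-unit activation code derived from the unit’s serial number with a secret only you hold. The sketch below is a rough illustration, not a recipe: keyed_mac() and device_secret_key are hypothetical stand-ins for a proper keyed MAC from a vetted crypto library and a key you provision in a trusted facility.

/* Activation gate against overbuilding: a unit stays disabled until it
 * receives a code that only the legitimate vendor can compute.
 * keyed_mac() is a hypothetical placeholder for a real keyed MAC
 * (e.g., an HMAC from an established crypto library). */

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAC_LEN 16

extern const uint8_t device_secret_key[32];   /* provisioned in a trusted facility */

/* Hypothetical: compute a keyed MAC over msg into out[MAC_LEN]. */
extern void keyed_mac(const uint8_t *key, size_t key_len,
                      const uint8_t *msg, size_t msg_len,
                      uint8_t out[MAC_LEN]);

bool activation_code_valid(const uint8_t serial[8], const uint8_t code[MAC_LEN])
{
    uint8_t expected[MAC_LEN];
    keyed_mac(device_secret_key, sizeof device_secret_key, serial, 8, expected);

    /* Constant-time comparison so timing doesn't leak how close a guess is. */
    uint8_t diff = 0;
    for (size_t i = 0; i < MAC_LEN; i++)
        diff |= (uint8_t)(expected[i] ^ code[i]);
    return diff == 0;
}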

FPGAs can be used in a capacity similar to ASICs, but they can also provide a security hole if you’re not careful how you use them. Since FPGAs are standard parts, unscrupulous manufacturers have an easy supply available to them. All they need to do is capture (or redesign) your configuration bitstream, and they’re right back building working systems again. FPGA manufacturers offer a variety of schemes to thwart these thieves, with varying degrees of effectiveness and design cost. SRAM-based FPGAs (the most common devices) typically rely on bitstream encryption strategies to keep your IP out of the evildoers’ hands. Non-volatile devices like flash- and antifuse-based FPGAs rely on different schemes that we’ll discuss separately.

The typical attacks on the ASIC or FPGA (custom logic) part of a design are cloning and reverse-engineering. Cloning is clearly the easy one, from the thieves’ point of view. If you’re worried about reverse-engineering, you should first get out that probability calculator and determine whether such an attack on your design would be financially justified for the thief. Reverse-engineering is an expensive and time-consuming crime. For ASICs, reverse engineering is widely reported to be possible by examining the device under a microscope and plotting the locations of metal traces and vias, eventually unraveling the netlist for the design. If your design happens to be a 90nm ASIC with 10 layers of metal and a billion or more transistors, I’d say buy the thieves a microscope and wish them luck. Unless they’re way smarter than most of us, it’ll be decades before they have a working replica. Their black-market Speak-and-Spell might be almost ready today.

FPGAs (in the old days) made much more attractive targets. Since the bitstream is stored outside the device in an external PROM, the programming bits could be intercepted between the PROM and the FPGA, and the design could then be easily cloned. To prevent this, SRAM-based FPGA manufacturers now allow the bitstream to be encrypted and an encryption key to be programmed into the FPGA device itself. Only an FPGA programmed with the correct key can decrypt and load the bitstream. You can have your device manufactured in an untrusted environment, then have the encryption keys added in your own facility or by a trusted third party. Stealing your design now becomes a Bondesque adventure of feature-length proportions, complete with shady characters, secret codes, and cash payoffs – lots of fun to write about, but less than practical for most commercial purposes.
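
To make the key handling explicit, here’s a conceptual model of that provisioning flow. In a real design, the encryption is done by the FPGA vendor’s tools and the decryption by dedicated logic on the chip; every function below is a hypothetical stand-in, used only to show where the key appears and where it never should.

/* Conceptual model of the encrypted-bitstream flow; all functions are
 * hypothetical stand-ins for vendor tools and on-chip hardware. */

#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t bytes[32]; } fpga_key_t;

extern fpga_key_t generate_device_key(void);                /* trusted facility */
extern void       program_key_into_fpga(const fpga_key_t *k);
extern void       encrypt_bitstream(const fpga_key_t *k,
                                     const uint8_t *plain, size_t len,
                                     uint8_t *cipher);       /* design house */
extern void       store_in_config_prom(const uint8_t *cipher, size_t len);

void provision(const uint8_t *bitstream, size_t len, uint8_t *cipher_buf)
{
    fpga_key_t key = generate_device_key();

    program_key_into_fpga(&key);           /* in-house or via a trusted third party    */
    encrypt_bitstream(&key, bitstream, len, cipher_buf);
    store_in_config_prom(cipher_buf, len); /* safe to hand to the contract manufacturer */

    /* The plaintext bitstream and the key never travel together: the factory
     * sees only ciphertext, and the key lives only inside the FPGA. */
}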

The non-volatile FPGAs like antifuse and flash devices are probably inherently more secure. There is always the microscope trick (described above), but with antifuse, it is extremely difficult to tell which junctions are fused and which are not. Without that distinction, all antifuse parts look alike. Flash is similar to antifuse, except that it is reprogrammable. If you plan to put a scheme into place to reprogram it in the field, you face similar challenges to those of SRAM FPGAs, with similar antidotes.

Beyond protecting your own interests and IP, there’s the issue of protecting those downstream from you – your OEMs and end users. They have issues with protection of their data and designs that live inside or flow through your product. In the second part of this series, we’ll look at their unique problems and the methods available to secure them as well. Until then, remain vigilant. Keep a sharp watch and always remember to wear your foil hat. You never know who’s listening.
