Lost in the Security Labyrinth

No. Not another security article. Please, haven’t we all had enough? We’re afraid already. We are sick to death of the doomsday warnings about the number of glaring security holes in just about everything we touch and the inadequacy of our own security measures. We don’t want to be lectured again about how careless we’ve been. We don’t need to be pitched yet another snake-oil, safe-as-a-baby’s-bottom, can’t-survive-the-apocalypse-without-it, magic-button security solution – that costs only slightly more than the thing it’s protecting and probably makes it so hard to use that we’ll end up just giving up on the whole thing.

As an editor, I am pitched security stories constantly. It seems that new companies are starting up every single day with a mission to make money from our fear and paranoia. Yes, we could become the Henny Penny Technology Press, running around yelling about how the sky is falling and we’re all doomed. And yes, there are real security threats out there that require all of us – especially engineers – to take reasonable precautions. But our preoccupation with keeping the bad guys at bay may have gotten just a little out of hand, and it’s giving rise to an industry that’s possibly even less scrupulous than those it purports to defend us against.

Our view of security is – choose a metaphor – can’t see the forest for the trees, elephant and the blind men, too many cooks… Really, it’s just an ad-hoc mess. In engineering, we go around designing locks on just about everything we build. We want to be prudent and responsible, and often we’re not even sure exactly what we’re protecting or whom we are protecting it against. And, in the world of security, the cast of characters would make the credits run for several hours after the movie ends.

Let’s take a look at a hypothetical design using something like one of the new SoC FPGAs. These devices combine programmable hardware (FPGA fabric) with conventional processing subsystems all on one chip. They could be used to implement an enormous variety of systems, but let’s pretend for a moment that we’re designing some kind of high-value communications hub for home or industry. It will have custom hardware in the FPGA fabric, embedded software (including operating systems) running on the applications processors, wired and wireless communications, third-party applications, and so forth. In other words, it will be a pretty typical modern electronic system.

So, who needs protecting? Well, in our example, Altera or Xilinx would like to be protected. They want to be sure that the system isn’t being manufactured with fraudulent parts, and that their design tools and IP are not being reverse-engineered. The makers of any third-party hardware IP included in the FPGA fabric would like to be protected. They want their IP to be used only where it’s appropriately licensed. The creators of the FPGA design want to be sure that nobody can snatch and reverse-engineer their hardware design, and the systems company wants to be sure that this is a real, non-cloned, non-overbuilt instance of their product. You know – one somebody actually paid them money for.

On the software side, there is a similar menagerie of protectees. The developers of the software IP, the operating systems, and the applications want to be sure that only a licensed version of their code is being used, and that nobody is reverse-engineering or hacking their software. Network providers want to be sure that the system is a good citizen of their network, and that it isn’t used to defeat their moats and guard towers. The end user of the system also wants their data to be protected, and they hold the system company accountable for the integrity of their information as well. Sadly, this is just a partial list. And the list of potential bad guys is just as long.

Know what else? Some of the good guys are also bad guys.

Yep, it’s true. The world is not a melodrama, made up of mustache-twirling Snidely-Do-Wrongs and white-hat-wearing Goodstrong McCleans. It’s actually more of a dark comedy with highly conflicted characters who swerve back and forth across the axis of good and evil like drunks on the centerline of a dark winding highway. Sometimes, security is even needed to protect people from themselves. “Are you sure about that? It could be pretty bad. Press OK if you really want to.”

Security people use a number of models to approximate their view of the world. Usually, those models describe “zones of trust” which are little guarded islands with virtual guard booths and lists of who is allowed to do what inside. Unfortunately, this model overlays on top of the ad-hoc landscape we described before, so there are all kinds of crazy things going on in another dimension that the trust-zone guards can’t even see.
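As a toy illustration of that model (all names here are hypothetical, not any real security API), a trust zone amounts to a guarded island with a list of who may do what inside it – and each guard sees only its own island:

```python
# Toy sketch of the "zones of trust" model: each zone is a guarded
# island with a list of who is allowed to do what inside.
# All zone and principal names are hypothetical illustrations.

class TrustZone:
    def __init__(self, name, permissions):
        self.name = name
        # permissions maps principal -> set of allowed actions
        self.permissions = permissions

    def allows(self, principal, action):
        # The virtual guard booth: check the list, deny anything unlisted.
        return action in self.permissions.get(principal, set())

# Two independently defined zones, each guarding its own island.
fpga_zone = TrustZone("fpga_fabric", {"oem_bitstream": {"configure"}})
os_zone = TrustZone("applications_os", {"signed_app": {"execute", "read_data"}})

# Each guard rules only on its own island: the FPGA zone has no
# opinion about software, and vice versa -- which is exactly the
# ad-hoc blind spot described above.
print(fpga_zone.allows("oem_bitstream", "configure"))  # True
print(os_zone.allows("oem_bitstream", "configure"))    # False
```

The gap, of course, is everything that crosses between the islands: neither guard booth is responsible for the boat ride in the middle.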

The stacks that comprise modern systems have grown thick, and, with various layers coming from different suppliers, it’s understandable that our overall security system comes out looking a little like a clown car. We rely on every layer and every piece of the system to worry about its own security in its own way, and against its own set of anticipated villains.

But, is the ad-hoc approach that’s landed in our laps necessarily the best way? Or should we have some grand unified field theory of security that manages all aspects of our systems, securing everything with one giant industrial-grade lock? Certainly a consistent approach would improve the user experience. Having to memorize passwords for some parts, get license keys for others, and use biometric identification or other crazy crypto tricks for still others, all in the same system, may be a little more than off-putting for the typical user. A unified approach to security could make that situation noticeably better. And, a lot of us have a tendency to put a bank-grade vault door on the front while leaving a flapping screen on the side entrance. Unified security would help to eliminate that issue as well.

But a unified approach would also give us a single point of failure. If someone cracks that one big lock, then, depending on their methods, they may have full run of the entire system.

Most system engineers are not security experts, and that is a fundamental problem. There is a temptation to design our own security measures, but – lacking specific expertise – our efforts are often naive and inadequate. On the other hand, if we employ third-party solutions, we run the risk of a common attack taking us out along with the rest of the third party’s clients. Bad guys like to go for the hack that gives them maximum leverage, so if they can design an attack against a standardized security system, they’ve just broken into hundreds or thousands of systems instead of just one.

The best approach is to really understand your particular system, what and whom you are trying to protect, the potential consequences of a successful attack, and the types of attackers who would benefit from breaking in. That will let you scope your security efforts and decide what level of engineering investment and user inconvenience is prudent in locking down your product. Then, you should assess your own level of expertise in implementing the required measures. Don’t be afraid to bring in help if you need it, or to adopt third-party solutions if they’re appropriate. Chances are, your product is not defined by its security level, and you should spend the majority of your creative energy on the features that truly differentiate it for your customers. 
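The scoping exercise above can be sketched as a simple inventory: enumerate what you are protecting, who would attack it, and what a breach would cost, then rank where the engineering effort should go. The assets, attackers, and scores below are purely illustrative placeholders for the hypothetical communications hub, not a real methodology:

```python
# Minimal threat-model inventory for the hypothetical communications
# hub. All entries and scores are illustrative placeholders.

threats = [
    # (asset, attacker, consequence severity 1-5, attack likelihood 1-5)
    ("FPGA bitstream", "cloner",           4, 2),
    ("user data",      "remote attacker",  5, 4),
    ("software IP",    "reverse engineer", 3, 3),
    ("network access", "botnet operator",  4, 4),
]

# A crude risk score: severity x likelihood. Spend security effort
# where the score is highest, rather than uniformly everywhere.
ranked = sorted(threats, key=lambda t: t[2] * t[3], reverse=True)
for asset, attacker, sev, like in ranked:
    print(f"{asset:15s} vs {attacker:16s} risk={sev * like}")
```

Even a back-of-the-envelope ranking like this makes it harder to put the bank vault on the front door while leaving the screen door flapping on the side.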

2 thoughts on “Ad-Hocurity”

  1. It comes down to responsibly taking risks and doing due diligence for mitigating and evaluating those risks.

    When dealing with other valuable items, there are well established vetting processes, and more importantly required bonds, for doing business.

    Surety bonds are frequently the basic necessary part of getting a customer to trust you, as it means that your business has been carefully vetted by the bond holder. The customer knows that if your business fails to perform, that the bond issuer will cover the losses.

    Fidelity bonds are frequently required to cover risky losses by rogue employees. Again, the bond issuer will thoroughly vet both the employer and the employee before issuing a Fidelity bond. Thus the customer knows they are covered if the bond is breached, and that the bond issuer’s vetting process mitigates risks for the customer.

    Lastly, License and Permit bonds are frequently required to make sure that the companies and people you are doing business with are actually following required regulations and best practices for the industry. Again, customers can trust the bond issuer to do more than basic due diligence in vetting those covered by the bond.

    All of this needs to be applied to the cyber security industry, and needs to be a critical part of vendor selection and contractor selection. If a vendor cannot back up their snake-oil sales pitches with a surety bond, it’s time to find a more reputable vendor that will. If you allow a contractor into your business who will handle valuable inside knowledge, trade secrets, and business-critical IP without providing one or more bonds to protect you, then you have made a poor choice of contractors.

    Think this is not necessary? Consider that bonding like this is absolutely necessary in the financial world … and if your company’s IP is of equal value, then you MUST take the same necessary steps to mitigate risks and do the REQUIRED due diligence.

    It’s probably time US regulators start looking past financial crimes and require similar oversight of cyber security and IP crimes.


  2. Late comment here — you managed to publish this important line of thinking during my paternity leave.

    You of course nail the business issue in your very last sentence: companies try to identify the users that will ‘pay for’ security, and security just doesn’t usually make the top-three discriminating features list in most major systems.

    A thousand different articles, however, say that this requirement rack-and-stack will change with cloud computing and IoT. I think they are right, at least with respect to cloud processing and data.

    Open Standards and Interfaces are the nearest tool we have for adopting a unified approach to security, but those face the same business challenges.
