
Ad-Hocurity

Lost in the Security Labyrinth

No. Not another security article. Please, haven’t we all had enough? We’re afraid already. We are sick to death of the doomsday warnings about the number of glaring security holes in just about everything we touch and the inadequacy of our own security measures. We don’t want to be lectured again about how careless we’ve been. We don’t need to be pitched yet another snake-oil, safe-as-a-baby’s-bottom, can’t-survive-the-apocalypse-without-it, magic-button security solution – that costs only slightly more than the thing it’s protecting and probably makes it so hard to use that we’ll end up just giving up on the whole thing.

As an editor, I am pitched security stories constantly. It seems that new companies are starting up every single day with a mission to make money from our fear and paranoia. Yes, we could become the Henny Penny Technology Press, running around yelling about how the sky is falling and we’re all doomed. And yes, there are real security threats out there that require all of us – especially engineers – to take reasonable precautions. But our preoccupation with keeping the bad guys at bay may have gotten just a little out of hand, and it’s giving rise to an industry that’s possibly even less scrupulous than those it purports to defend us against.

Our view of security is – choose a metaphor – can’t see the forest for the trees, the elephant and the blind men, too many cooks… Really, it’s just an ad-hoc mess. In engineering, we go around designing locks for just about everything we build. We want to be prudent and responsible, but often we’re not even sure exactly what we’re protecting or whom we’re protecting it against. And, in the world of security, the cast of characters would make the credits run for several hours after the movie ends.

Let’s take a look at a hypothetical design using something like one of the new SoC FPGAs. These devices combine programmable hardware (FPGA fabric) with conventional processing subsystems all on one chip. They could be used to implement an enormous variety of systems, but let’s pretend for a moment that we’re designing some kind of high-value communications hub for home or industry. It will have custom hardware in the FPGA fabric, embedded software (including operating systems) running on the applications processors, wired and wireless communications, third-party applications, and so forth. In other words, it will be a pretty typical modern electronic system.

So, who needs protecting? Well, in our example, Altera or Xilinx would like to be protected. They want to be sure that the system isn’t being manufactured with fraudulent parts, and that their design tools and IP are not being reverse-engineered. The makers of any third-party hardware IP included in the FPGA fabric would like to be protected. They want their IP to be used only where it’s appropriately licensed. The creators of the FPGA design want to be sure that nobody can snatch and reverse-engineer their hardware design, and the systems company wants to be sure that this is a real, non-cloned, non-overbuilt instance of their product. You know – one somebody actually paid them money for.

On the software side, there is a similar menagerie of protectees. The developers of the software IP, the operating systems, and the applications want to be sure that only licensed versions of their code are being used, and that nobody is reverse-engineering or hacking their software. Network providers want to be sure that the system is a good citizen of their network, and that it isn’t used to defeat their moats and guard towers. The end user of the system also wants their data to be protected, and they hold the systems company accountable for the integrity of their information as well. Sadly, this is just a partial list. And the list of potential bad guys is just as long.
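To make the sprawl concrete, here’s a minimal sketch of how you might tabulate the protectees for a system like our hypothetical hub. Everything in it – the names, the assets, the adversaries – is illustrative only, not drawn from any real product:

```python
# Illustrative inventory of protectees for the hypothetical hub.
# Every name, asset, and adversary here is made up for this sketch.

from dataclasses import dataclass

@dataclass
class Protectee:
    name: str
    asset: str              # what they want protected
    adversaries: list[str]  # whom they fear

protectees = [
    Protectee("FPGA vendor", "genuine silicon, tools, and IP",
              ["counterfeiters", "reverse engineers"]),
    Protectee("hardware IP vendor", "licensed use of IP cores",
              ["unlicensed integrators"]),
    Protectee("FPGA design team", "the hardware design itself",
              ["cloners", "reverse engineers"]),
    Protectee("systems company", "real, paid-for product instances",
              ["cloners", "overbuilders"]),
    Protectee("software and OS vendors", "licensed, unmodified code",
              ["pirates", "reverse engineers"]),
    Protectee("network provider", "the integrity of the network",
              ["compromised or rogue nodes"]),
    Protectee("end user", "personal data",
              ["potentially everyone above"]),
]

for p in protectees:
    print(f"{p.name}: protects {p.asset!r} against {', '.join(p.adversaries)}")
```

Note that last entry: the end user’s adversary list potentially includes everyone else in the table.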

Know what else? Some of the good guys are also bad guys.

Yep, it’s true. The world is not a melodrama, made up of mustache-twirling Snidely-Do-Wrongs and white-hat-wearing Goodstrong McCleans. It’s actually more of a dark comedy with highly conflicted characters who swerve back and forth across the axis of good and evil like drunks on the centerline of a dark winding highway. Sometimes, security is even needed to protect people from themselves. “Are you sure about that? It could be pretty bad. Press OK if you really want to.”

Security people use a number of models to approximate their view of the world. Usually, those models describe “zones of trust,” which are little guarded islands with virtual guard booths and lists of who is allowed to do what inside. Unfortunately, this model gets laid on top of the ad-hoc landscape we described before, so there are all kinds of crazy things going on in another dimension that the trust-zone guards can’t even see.
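A toy model makes the blind spot easy to see. In the sketch below, every zone name and actor is invented for illustration; each zone’s guard checks its own access list faithfully, but an actor that never presents itself at a guard booth is simply invisible to the model:

```python
# Toy trust-zone model. The zones, actors, and rules are invented
# for illustration; real trust architectures are far richer.

TRUST_ZONES = {
    "secure_boot":   {"rom_bootloader"},
    "fpga_fabric":   {"signed_bitstream_loader"},
    "app_processor": {"os_kernel", "licensed_app"},
}

def guard_allows(zone: str, actor: str) -> bool:
    """Each guard knows only its own list; it cannot see other zones,
    let alone activity that bypasses the zones entirely."""
    return actor in TRUST_ZONES.get(zone, set())

print(guard_allows("fpga_fabric", "signed_bitstream_loader"))  # True
print(guard_allows("fpga_fabric", "debug_probe"))              # False
# ...but that False only happens if the probe politely asks the guard.
# An attacker working in a dimension the model doesn't cover never
# triggers this check at all.
```

The point is not the code, of course; it’s that the guard function is the entire model, and anything that routes around it is unmodeled by construction.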

The stacks that comprise modern systems have grown thick, and, with various layers coming from different suppliers, it’s understandable that our overall security system comes out looking a little like a clown car. We rely on every layer and every piece of the system to worry about its own security in its own way, and against its own set of anticipated villains.

But, is the ad-hoc approach that’s landed in our laps necessarily the best way? Or should we have some grand unified field theory of security that manages all aspects of our systems, securing everything with one giant industrial-grade lock? Certainly a consistent approach would improve the user experience. Having to memorize passwords for some parts, get license keys for others, and use biometric identification or other crazy crypto tricks for still others, all in the same system, may be a little more than off-putting for the typical user. A unified approach to security could make that situation noticeably better. And, a lot of us have a tendency to put a bank-grade vault door on the front while leaving a flapping screen door on the side entrance. Unified security would help to eliminate that issue as well.

But a unified approach would also give us a single point of failure. If someone conquers that one big lock, depending on their methods, they might defeat the whole system at once and have full run of it.

Most system engineers are not security experts, and that is a fundamental problem. There is a temptation to design our own security measures, but – lacking specific expertise – our efforts are often naive and inadequate. On the other hand, if we employ third-party solutions, we run the risk of a common attack taking us out along with the rest of the third party’s clients. Bad guys like to go for the hack that gives them maximum leverage, so if they can design an attack against a standardized security system, they’ve just broken into hundreds or thousands of systems instead of just one.

The best approach is to really understand your particular system, what and whom you are trying to protect, the potential consequences of a successful attack, and the types of attackers who would benefit from breaking in. That will let you scope your security efforts and decide what level of engineering investment and user inconvenience is prudent in locking down your product. Then, you should assess your own level of expertise in implementing the required measures. Don’t be afraid to bring in help if you need it, or to adopt third-party solutions if they’re appropriate. Chances are, your product is not defined by its security level, and you should spend the majority of your creative energy on the features that truly differentiate it for your customers. 
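One way to start that scoping exercise is simply to write the questions down as a worksheet and force yourself to answer them before designing a single lock. The sketch below is one possible shape for such a worksheet; every entry in it is a hypothetical placeholder, not advice for any particular product:

```python
# A minimal threat-scoping worksheet. All entries are hypothetical
# placeholders; fill in the real ones for your own product.

threat_model = {
    "assets":       ["user data", "hardware design", "license keys"],
    "attackers":    ["cloners", "botnet operators", "curious insiders"],
    "consequences": {"data breach": "high",
                     "product cloning": "medium",
                     "UI spoofing": "low"},
}

def triage(consequence: str) -> str:
    """Crude rule of thumb: match the investment (and the user
    inconvenience) to the damage, not to the scariest sales pitch."""
    severity = threat_model["consequences"].get(consequence, "low")
    return {
        "high":   "bring in outside security expertise",
        "medium": "adopt a vetted third-party solution",
        "low":    "standard precautions; spend the energy on features",
    }[severity]

for consequence in threat_model["consequences"]:
    print(f"{consequence}: {triage(consequence)}")
```

None of this is heavyweight. The value is in being forced to name the attackers and the consequences before buying, or building, any locks.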

2 thoughts on “Ad-Hocurity”

  1. It comes down to responsibly taking risks and doing the due diligence to evaluate and mitigate those risks.

    When dealing in other kinds of valuable items, there are well-established vetting processes – and, more importantly, required bonds – for doing business.

    Surety bonds are frequently a basic, necessary part of getting a customer to trust you, as they mean that your business has been carefully vetted by the bond issuer. The customer knows that if your business fails to perform, the bond issuer will cover the losses.

    Fidelity bonds are frequently required to cover losses caused by rogue employees. Again, the bond issuer will thoroughly vet both the employer and the employee before issuing a fidelity bond. Thus the customer knows they are covered if the bond is breached, and that the bond issuer’s vetting process mitigates risks for the customer.

    Lastly, license and permit bonds are frequently required to make sure that the companies and people you are doing business with are actually following required regulations and best practices for the industry. Again, customers can trust the bond issuer to do more than basic due diligence in vetting those covered by the bond.

    All of this needs to be applied to the cybersecurity industry, and it needs to be a critical part of vendor and contractor selection. If a vendor cannot back up their snake-oil sales pitches with a surety bond, it’s time to find a more reputable vendor that will. If you allow a contractor who will handle valuable inside knowledge, trade secrets, and business-critical IP into your business without their providing one or more bonds to protect you, then you have made a poor choice of contractor.

    Think this is not necessary? Consider that bonding like this is absolutely necessary in the financial world … and if your company’s IP is of equal value, then you MUST take the same necessary steps to mitigate risks and do the REQUIRED due diligence.

    It’s probably time US regulators start looking past financial crimes and apply similar oversight to cybersecurity and IP crimes.

    http://www.fdic.gov/regulations/safety/manual/section4-4.pdf

    Late comment here – you managed to publish this important line of thinking during my paternity leave!

    You of course nail the business issue in your very last sentence: companies try to identify the users that will ‘pay for’ security, and security just doesn’t usually make the top-three discriminating features list in most major systems.

    A thousand different articles, however, say that this requirement rack-and-stack will change with cloud computing and IoT. I think they are right, at least with respect to cloud processing and data.

    Open standards and interfaces are the nearest tool we have for adopting any unified approach to security, but they face the same business challenges.

