
Obscurity and the Illusion of Security

Eric Raymond, a prominent voice in the open-source movement and author of The Cathedral and the Bazaar, stated it well: “Any security software design that doesn’t assume the enemy possesses the source code is already untrustworthy.” Decades earlier, Claude Shannon was even more succinct: “The enemy knows the system.” Security experts call this Kerckhoffs’ Principle, in honor of the 19th-century cryptographer who first formulated it for cryptosystems. The underlying assumption is that any security-critical flaw will be found and exploited sooner or later, so at best, secrecy buys you only some delay.
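As a toy illustration of Kerckhoffs’ Principle (my example, not from the original sources): in a well-designed cipher, the algorithm can be completely public, because all the secrecy lives in the key. A minimal one-time-pad-style XOR sketch in Python:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with a key byte; applying it twice recovers the input."""
    if len(key) != len(data):
        raise ValueError("key must be as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # the key is the ONLY secret
ciphertext = xor_cipher(message, key)

# The enemy may know xor_cipher() completely; without the key,
# the ciphertext alone tells them nothing useful.
assert xor_cipher(ciphertext, key) == message
```

Publishing `xor_cipher` costs nothing; publishing `key` costs everything. That asymmetry, not secrecy of the mechanism, is what sound security designs rely on.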

Despite all this, there’s a widespread yet mistaken belief that the security of a system requires that its code be kept secret or obfuscated, an approach summarized as “security by obscurity.” Here are a few recent examples.

Open-Source vs. Closed-Source Code

One data point comes from a recent ruling of the Federal Communications Commission (FCC), the agency responsible for certifying that wireless communication devices won’t interfere with networks. The FCC stated, “A system that is wholly dependent on open-source elements will have a high burden to demonstrate that it is sufficiently secure to warrant authorization as software-defined radio.” Why is the bar higher for open-source than for closed-source code? Presumably because there is an expectation that even certified code contains plenty of security holes, and in open-source code these are easier to find and exploit than in closed-source code. This is classical security by obscurity. I hope it makes you feel all warm, fuzzy and secure. It doesn’t do it for me, though.

National Security by Obscurity

The second example is more worrying. Green Hills Software is a leading provider of operating systems for the aerospace and defense markets. In a white paper by CEO and founder Dan O’Dowd, published on the company’s web site, we find this intriguing statement: “Publishing the source code for the operating systems used in our most critical defense systems is analogous to publishing the wiring diagrams for our military base security systems. […] Unless an operating system has no vulnerabilities, publishing its source code is sure to reduce security.” Note that the company’s Integrity operating system has just obtained Common Criteria EAL6+ security certification, the highest ever achieved for a general-purpose OS, leading some to claim (incorrectly) that Integrity is “provably secure.” So, if it’s so secure, why does it depend on security by obscurity? Apparently, O’Dowd doesn’t believe it’s secure enough to publish the source code.

Do we really feel comfortable basing national security on obscurity? Kerckhoffs and Shannon wouldn’t have been, and neither am I.

Assembly Coded or C-Coded

The third example is actually quite entertaining. Trango Virtual Processors, a provider of virtualization software for embedded systems (recently acquired by VMware), uses the tagline, “The Secure Virtual Processor.” We learn more about their idea of security from an FAQ on their web site, which states, “[The] hypervisor is small, and written in assembly language. […] As an assembly-coded product it is also much more difficult for hackers to decipher than C-coded products.” Again, classical security by obscurity.

But what can we really learn from that statement? A hacker wouldn’t actually have access to the assembler source, only the binary, as Trango keeps the source secret (they actually keep the documentation secret, too). Why should the binary code generated from assembly source be any more obscure than the binaries generated by a C compiler? Compiler output can be highly structured, although much of that structure gets lost when optimization is turned on. The structure of assembly code depends on — how shall I put this? — how competently it was written. So if the assembly code is more obscure than the (optimized) compiler output, it must be an unintelligible pile of spaghetti. Does that make you feel that you can trust it? I would think the exact opposite!
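The underlying point is that compiled artifacts are always inspectable: attackers work from the binary, which can be disassembled no matter what the source language was. As a loose analogy (my illustration; the article is about native machine code, and I’m using Python bytecode here purely because it is easy to demonstrate), the standard `dis` module shows that whatever a function’s source looked like, its compiled form remains fully visible to anyone who holds it:

```python
import dis

def checksum(data: bytes) -> int:
    """A deliberately plain 8-bit additive checksum."""
    total = 0
    for b in data:
        total = (total + b) & 0xFF
    return total

# The compiled code object can always be inspected: names, constants,
# and the instruction stream are all recoverable from the artifact alone.
listing = dis.code_info(checksum)
print(listing)
```

A native binary is no different in principle: disassemblers recover its structure regardless of whether the source was C or hand-written assembly, so “assembly-coded” buys no meaningful obscurity.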

Getting Real about Security

While some of this may be amusing, security in general is a serious issue, too serious to be left to amateurish obfuscation and secrecy, which are at best a third-class substitute. It’s time that we got serious about security, rather than admit failure and hide behind obscurity. We obviously need systems that are designed for security. These exist, with Integrity being one of them. But we need more: we need proof that they really are secure. This includes proof that the implementation of the design is correct.

No system to date has such a proof, and the conservative assumption must therefore be that all systems are insecure. But we really can do better. The L4.verified research project at NICTA is showing us how. It is performing a complete formal verification of seL4, a version of the L4 microkernel. In other words, the researchers are developing a mathematical proof that the system’s implementation (in C and assembler code) has the required security properties for which the system was designed. Simply stated: proof that the system is free from security-relevant bugs.
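To give a feel for what “the implementation satisfies the specification” means, here is a miniature analogy of my own (emphatically not how L4.verified works, which builds machine-checked mathematical proofs rather than running tests): a clear, obviously-correct specification, a bit-twiddling implementation, and a check that they agree over the entire input space.

```python
def spec_parity(x: int) -> int:
    """Specification: the parity of an 8-bit value, written the obvious way."""
    return bin(x).count("1") % 2

def impl_parity(x: int) -> int:
    """Implementation: an optimized bit-folding version for 8-bit inputs."""
    x ^= x >> 4
    x ^= x >> 2
    x ^= x >> 1
    return x & 1

# The input space here is tiny, so we can check every case exhaustively.
# A formal proof generalizes this idea to infinite state spaces, where
# testing can never cover all cases but a mathematical argument can.
assert all(impl_parity(x) == spec_parity(x) for x in range(256))
```

For a kernel with an effectively unbounded state space, no amount of testing plays the role of that `assert`; only a proof over all possible states does, which is exactly what the seL4 verification provides.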

A year ago, that proof covered an executable model that serves as a low-level design of the kernel. This already made it the most thoroughly formally analyzed operating system ever. The researchers expect to complete the proof covering the implementation within the next couple of months. This will establish seL4 as the first really secure system.

Obscurity provides no security, only an illusion of it. Let’s get real security.
