
Cache Clunker

We all know that server computing and embedded computing are different for lots of reasons, many of which can be summed up in one word: budget. Memory budget, performance budget, cost budget, etc. In other words, embedded systems have tight budgets, servers (and desktops and laptops) don’t. Much.

But occasionally something whacks you across the head in terms of the difference. Not long ago, there was an announcement out of MIT about a way to disguise memory access patterns for security. The deal is that, even when you’ve taken careful measures to encrypt data and otherwise keep prying eyes from snooping what’s being computed, those eyes can still monitor your memory accesses and learn too much. Hard to imagine, but apparently people way smarter than I am have shown this.

To be clear, this is positioned as an issue for cloud computing, where a set of resources may be handling delicate computing for many unrelated customers; because they share processors or memory or something, there’s the potential for, well, let’s say peeking at your neighbor’s exam paper.

Fundamental to this is the idea that multiple unrelated processes are running, something that doesn’t generally apply to embedded systems. However, if monitoring memory accesses can give away secrets, then that could still happen in an embedded device – it’s just that the snooper might be someone breaking the thing apart and probing rather than some other co-resident process.

The proposed solution is to put an intervening piece of hardware in the memory access path. Each memory address is placed in a tree, and whenever that address is accessed, every address along its branch of the tree is accessed as well. Afterwards, the address you really wanted is randomly swapped with some other address in the tree so that, the next time you access it, you’re traversing a totally different set of addresses.

In this manner, you’re completely shuffling your memory access patterns, making it hard (impossible?) to tell what’s going on if all you’re watching is memory accesses.
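To make that shuffle a little more concrete, here’s a minimal software sketch of the idea – emphatically not the actual Ascend hardware, whose internals aren’t spelled out in the release. Blocks live in a small binary tree, every access touches the entire root-to-leaf path, and the block is then remapped to a fresh random leaf. The sizes, names, and heap-style array layout below are my own illustrative assumptions.

```c
#include <stdio.h>
#include <stdlib.h>

#define LEVELS      4                    /* tree height -- an assumed toy size      */
#define NUM_LEAVES  (1 << (LEVELS - 1))  /* 8 leaves                                */
#define NUM_BLOCKS  16                   /* logical blocks whose accesses we hide   */

static int position[NUM_BLOCKS];         /* block -> leaf it is currently assigned  */

/* Touch every node on the path between the root and 'leaf'. In hardware these
 * would be real memory reads/writes; here we just print the node indices to
 * show the pattern an eavesdropper would observe. */
static void access_path(int leaf)
{
    int node = leaf + NUM_LEAVES - 1;    /* leaf's index in a heap-style array */
    for (;;) {
        printf(" node %d", node);
        if (node == 0)
            break;
        node = (node - 1) / 2;           /* step up toward the root */
    }
    printf("\n");
}

/* Obfuscated access to logical block 'b': read its whole path, then remap the
 * block to a new random leaf so the next access takes a different path. */
static void oblivious_access(int b)
{
    printf("access block %2d ->", b);
    access_path(position[b]);
    position[b] = rand() % NUM_LEAVES;
}

int main(void)
{
    for (int b = 0; b < NUM_BLOCKS; b++)
        position[b] = rand() % NUM_LEAVES;

    /* Hitting the same block repeatedly: the observed paths (usually) differ. */
    oblivious_access(5);
    oblivious_access(5);
    oblivious_access(5);
    return 0;
}
```

Even this toy version makes the cost obvious: one logical access turns into a whole path’s worth of physical ones, plus the bookkeeping to keep moving blocks around.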

What smacked me on the head was the amount of work embedded designers go through to align memory carefully so that it packs nicely into the cache, minimizing misses and getting the most bang for each access. I struggle to think what this secure scheme (called “Ascend”) would do to a well-behaved cache.
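For anyone who hasn’t lived through that exercise, here’s the flavor of it: the same arithmetic written two ways, one walking memory in the order it’s stored (so each cache line that’s fetched gets fully used) and one striding across it (so nearly every access pulls in a new line). The array size and the 64-byte line assumption are arbitrary illustrations; the point is that a path-shuffling scheme like the one above would scatter even the “good” version all over the tree.

```c
#include <stdint.h>

#define ROWS 1024
#define COLS 1024

static uint8_t image[ROWS][COLS];        /* ~1 MB working set, chosen arbitrarily */

/* Cache-friendly: walk the array in storage order (row-major in C), so each
 * 64-byte line that gets pulled in is fully consumed before it's evicted. */
uint32_t sum_row_major(void)
{
    uint32_t sum = 0;
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            sum += image[r][c];
    return sum;
}

/* Cache-hostile: identical arithmetic, but the column-first stride lands on a
 * different cache line on almost every access. */
uint32_t sum_col_major(void)
{
    uint32_t sum = 0;
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++)
            sum += image[r][c];
    return sum;
}
```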

Oh, and the other clue that we’re not thinking “embedded” with this? The performance hit is “only” 3-4X. To be fair, this is contrasted with other security ideas that apparently placed a 100X burden on access performance, and there’s no doubt that 3-4 is better than 100. But some embedded designers would give their left… eyeball to pick up 30-50% performance with one step.

I don’t know if there’s any way to map this idea into something more embedded-friendly; it’s intellectually interesting, and I’m not scoffing at its potential in the cloud, but unless I’m missing something (and comment below if I am), I’m not expecting this to come to a printer near me anytime soon. (And, again, to be fair, no one has suggested it should.)

You can find more in their release.
