
Cache Clunker

We all know that server computing and embedded computing are different for lots of reasons, many of which can be summed up in one word: budget. Memory budget, performance budget, cost budget, etc. In other words, embedded systems have tight budgets; servers (and desktops and laptops) don't. Much.

But occasionally something whacks you across the head in terms of the difference. Not long ago, there was an announcement out of MIT about a way to disguise memory access patterns for security. The deal is that, even when you’ve taken careful measures to encrypt data and otherwise keep prying eyes from snooping what’s being computed, those eyes can still monitor your memory accesses and learn too much. Hard to imagine, but apparently people way smarter than I am have shown this.

To be clear, this is positioned as an issue for cloud computing, where a set of resources may be handling delicate computing for many unrelated customers; because they share processors or memory or something, there’s the potential for, well, let’s say peeking at your neighbor’s exam paper.

Fundamental to this is the idea that multiple unrelated processes are running, something that doesn't generally apply to embedded systems. However, if monitoring memory accesses can give away secrets, then that could still occur in an embedded device – it's just that it might be someone breaking the thing apart and probing rather than some other co-resident process.

The solution proposed was to put an intervening piece of hardware in the memory access path. Each memory address is mapped into a tree, and when that address is accessed, every address along its branch of the tree is accessed as well. Afterwards, the address you really wanted is randomly swapped with some other address in the tree so that, the next time you access it, you're traversing a totally different set of addresses.
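If a sketch helps, the mechanism as described maps closely to what the literature calls Path ORAM, which is (as best I can tell) the construction underneath Ascend. Here's a toy version in C – hypothetical and heavily simplified, leaving out the encryption, per-node data buckets, the stash, and the write-back of the path that the real scheme requires – just to show the touch-the-whole-path-then-remap pattern:

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy sketch of the path-access idea behind Path ORAM. Hypothetical and
 * heavily simplified: no encryption, no buckets, no stash, no write-back. */

#define LEVELS 4                      /* tree height */
#define LEAVES (1 << (LEVELS - 1))    /* 8 leaves */
#define NODES  ((1 << LEVELS) - 1)    /* 15 nodes, stored as a heap */

static int tree[NODES];               /* each node holds one block (toy) */
static int pos_map[LEAVES];           /* logical block -> current leaf */

/* Touch every node on the root-to-leaf path, so an observer only ever
 * sees a full path being read, never a single address. */
static int access_block(int block)
{
    int leaf = pos_map[block];
    int node = LEAVES - 1 + leaf;     /* heap index of the leaf */
    int value = 0;

    while (1) {
        value ^= tree[node];          /* dummy read of every node on path */
        if (node == 0) break;
        node = (node - 1) / 2;        /* walk up toward the root */
    }

    /* Remap the block to a fresh random leaf, so the next access to the
     * same block traverses a (probably) different path. */
    pos_map[block] = rand() % LEAVES;
    return value;
}

int main(void)
{
    srand(42);
    for (int i = 0; i < LEAVES; i++) pos_map[i] = rand() % LEAVES;

    /* Two accesses to the same logical block touch different paths. */
    access_block(3);
    printf("block 3 now mapped to leaf %d\n", pos_map[3]);
    access_block(3);
    printf("block 3 now mapped to leaf %d\n", pos_map[3]);
    return 0;
}
```

Note what falls out of this: a single logical access turns into log-many physical accesses plus a remap, which is where the overhead comes from.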

In this manner, you’re completely shuffling your memory access patterns, making it hard (impossible?) to tell what’s going on if all you are watching is memory access.

The reason this smacked me on the head is the amount of work embedded designers go through to align memory carefully so that it packs nicely into the cache, minimizing misses and getting the most bang for each access. I struggle to imagine what this secure scheme (called "Ascend") would do to a well-behaved cache.
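For contrast, this is the sort of careful packing I mean – a hypothetical C snippet (names and the 64-byte line size are my assumptions, not anything from the MIT work) that pads and aligns a hot structure so one miss pulls in the whole record and two records never share a line. Randomly shuffled physical addresses would make this kind of tuning largely moot:

```c
#include <stdalign.h>
#include <stdio.h>

/* Hypothetical example of cache-conscious layout: align a hot structure
 * to a 64-byte cache line and pad it to exactly one line. */

#define CACHE_LINE 64

typedef struct {
    alignas(CACHE_LINE) unsigned int id;
    unsigned int state;
    unsigned char payload[CACHE_LINE - 2 * sizeof(unsigned int)];
} hot_record_t;

int main(void)
{
    hot_record_t records[4];
    printf("sizeof(hot_record_t) = %zu\n", sizeof(hot_record_t)); /* 64 */
    printf("record stride = %zu\n",
           (size_t)((char *)&records[1] - (char *)&records[0]));  /* 64 */
    return 0;
}
```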

Oh, and the other clue that we’re not thinking “embedded” with this? The performance hit is “only” 3-4X. To be fair, this is contrasted with other security ideas that apparently placed a 100X burden on access performance, and there’s no doubt that 3-4 is better than 100. But some embedded designers would give their left… eyeball to pick up 30-50% performance with one step.

I don’t know if there’s any way to map this idea into something more embedded-friendly; it’s intellectually interesting, and I’m not scoffing at its potential in the cloud, but unless I’m missing something (and comment below if I am), I’m not expecting this to come to a printer near me anytime soon. (And, again, to be fair, no one has suggested it should.)

You can find more in their release.
