
The Persistence of Memory

Performance-IP’s MRO Speeds up Slow Memories

“If you optimize everything, you will always be unhappy.” — Donald Knuth

Q: When is a cache not a cache?
A: When it’s a Memory Request Optimizer.

If that sounds tautological (aren’t all caches memory-request optimizers?), then you haven’t talked to Performance-IP, a small startup in the Boston area. P-IP has a patent-pending way to speed up your system’s slow accesses to external memory by interposing some clever logic of its own.

The company’s MRO (memory request optimizer) sits between your system bus and your memory controller – like a cache. But it’s not a cache. It monitors requests for external memory reads and supplies data from its own internal storage. But it isn’t a cache. It’s smart about how, when, and where your system is accessing external memory, so it can cut latency by huge amounts, but without being a cache. Its benefits are measurable but also somewhat unpredictable. But it’s still not a cache.

The MRO logic doesn’t have traditional cache tags, so it’s not technically a cache. Instead, it has “trackers,” which serve a similar purpose but work differently: rather than tagging stored data, they follow the streams of read requests passing through. The MRO is supplied as Verilog, and the number of trackers is configurable, so you can tune that count to balance performance against area and power. As a rule of thumb, you’ll want about 10–20 trackers, although some benchmarks show marked improvement with only four.
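
Since P-IP hasn’t published the MRO’s internals, a rough behavioral model may help fix the idea. Everything in this C sketch (the tracker fields, the NUM_TRACKERS knob, and the helpers that follow) is our own invention, not the actual design:

/* A minimal behavioral model of a tracker table. Performance-IP
 * hasn't published the MRO's internals, so every name and field
 * here is an illustrative assumption, not the real design. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_TRACKERS 16    /* build-time knob, like the Verilog parameter */

typedef struct {
    bool     valid;        /* tracker is currently following a stream    */
    uint64_t last_addr;    /* most recent read address in that stream    */
    int64_t  stride;       /* detected distance between successive reads */
    unsigned confidence;   /* consecutive reads that matched the stride  */
} tracker_t;

static tracker_t trackers[NUM_TRACKERS];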

The MRO does store data locally, like a cache, and that local storage is one source of its performance gains. Its internal stores (P-IP calls them response buffers) are undoubtedly faster than your external RAM, so any read “hit” is a performance win.

But its trackers are also proactive: they prefetch data based on what they observe about your code’s locality of reference. If the MRO’s internal statistics suggest that you’re reading a range of addresses linearly, it prefetches the upcoming data and parks it in a response buffer. If all goes according to plan, your processor never has to wait on those external reads at all.
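
Continuing the hypothetical model above, stride detection might look something like this. CONFIDENCE_THRESHOLD and prefetch_read() are invented names, and the real MRO’s policy is surely more sophisticated:

/* Sketch of stride detection, continuing the model above: once a
 * stream has matched its predicted next address a couple of times,
 * fetch ahead of demand. CONFIDENCE_THRESHOLD and prefetch_read()
 * are invented for illustration. */
#define CONFIDENCE_THRESHOLD 2

void prefetch_read(uint64_t addr);   /* fill a response buffer from external RAM */

void observe_read(uint64_t addr)
{
    for (int i = 0; i < NUM_TRACKERS; i++) {
        tracker_t *t = &trackers[i];
        if (t->valid && addr == t->last_addr + (uint64_t)t->stride) {
            t->last_addr = addr;
            if (++t->confidence >= CONFIDENCE_THRESHOLD)
                prefetch_read(addr + (uint64_t)t->stride);  /* stay ahead of the stream */
            return;
        }
    }
    /* No tracker predicted this read: claim a free tracker for a
     * possible new stream (assuming 64-byte line-sized strides). */
    for (int i = 0; i < NUM_TRACKERS; i++) {
        if (!trackers[i].valid) {
            trackers[i] = (tracker_t){ .valid = true, .last_addr = addr,
                                       .stride = 64, .confidence = 0 };
            return;
        }
    }
}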

This proactive prefetching is the other source of the MRO’s performance. Unlike a memory scheduler, the MRO never rearranges or reorders memory accesses; nothing gets delayed or hoisted to the front of the queue. Instead, it tries to impose some rationality on your system’s scattered memory accesses, looking for locality where the compiler couldn’t find any. That’s particularly fruitful in multicore and multithreaded systems, where each thread’s accesses might be perfectly linear but the combination of all threads and cores makes for a haphazard melee for memory. The MRO stands above the fray, looking for overall patterns it can exploit – as the sketch below illustrates.
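
Here’s the payoff in the same toy model: two cores’ linear streams arrive interleaved, yet each tracker latches onto its own stream, so both still look sequential. The printf stub stands in for an actual response-buffer fill:

/* Two cores' linear streams interleave at the MRO, yet each tracker
 * latches onto its own stream, so both still look sequential. */
#include <stdio.h>

void prefetch_read(uint64_t addr)
{
    printf("prefetch 0x%llx\n", (unsigned long long)addr);
}

int main(void)
{
    uint64_t a = 0x10000000u;   /* core 0's buffer */
    uint64_t b = 0x20000000u;   /* core 1's buffer */
    for (int i = 0; i < 8; i++) {
        observe_read(a); a += 64;   /* linear walk, line by line */
        observe_read(b); b += 64;   /* independent linear walk    */
    }
    return 0;
}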

Naturally, the slower your memory is, the better the MRO works. Or, more accurately, the greater the disparity between your processors’ performance and your memory’s performance, the greater the benefit. Not unlike a cache.
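
Some textbook average-access-time arithmetic (our illustration with made-up numbers, not P-IP’s data) shows why the gap matters:

/* Textbook average-access-time arithmetic: the slower the external
 * memory, the more each response-buffer hit is worth. */
double avg_latency_ns(double hit_rate, double hit_ns, double miss_ns)
{
    return hit_rate * hit_ns + (1.0 - hit_rate) * miss_ns;
}
/* With 60% hits at 5 ns against 100 ns external reads:
 *   0.6 * 5 + 0.4 * 100 = 43 ns average, down from 100 ns.
 * Widen the gap to 200 ns external reads and the same hit rate
 * yields 83 ns: the saving grows from 57 ns to 117 ns. */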

Once you’ve simulated, configured, and installed your MRO, you still have some run-time options available to you. It has three speeds: low, medium, and high (as well as “off”). The distinction is how aggressively the MRO will prefetch data that it thinks you might want. Set the mode too aggressively and you might generate more false fetches than you would see at a lower setting. It’s hard to predict which setting will work best with what software – which is why it’s programmable. Apart from these configuration settings, the MRO is entirely invisible to software. Sort of like a cache.
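
From the software side, picking an aggressiveness level presumably boils down to a register write. P-IP hasn’t published a register map, so the address and encodings in this sketch are pure invention:

/* Hypothetical driver-side control. MRO_CTRL_ADDR and the mode
 * encodings are made up for illustration; the real register map
 * isn't public. */
#include <stdint.h>

#define MRO_CTRL_ADDR 0x4000A000u    /* invented MMIO address */

enum mro_mode { MRO_OFF = 0, MRO_LOW = 1, MRO_MEDIUM = 2, MRO_HIGH = 3 };

static inline void mro_set_mode(enum mro_mode mode)
{
    /* Volatile write to a memory-mapped control register; e.g.,
     * back off to MRO_LOW if false fetches start to dominate. */
    volatile uint32_t *ctrl = (volatile uint32_t *)(uintptr_t)MRO_CTRL_ADDR;
    *ctrl = (uint32_t)mode;
}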

Performance-IP has lots of benchmark results on its website to show how the MRO performs in various modes, with various test suites, and at various memory speeds. With everything configured just right, the company has seen 88% reductions in memory latency and 50% improvements in CPU performance.

The company doesn’t charge royalties for the MRO – just a single up-front license fee, with free support. It’s a pretty good deal, if you’ve got the cash.
