
Day of the DRAM

New DRAM Interfaces Keep Memory Buses Humming

A pair of new DRAM interfaces broke cover recently, and both promise to make engineers’ lives tougher – no, wait, easier! Sorry. Easier because the new interfaces make memory faster and more power-efficient (both good things), but tougher because it’ll be harder to decide which one you want. And they’re definitely mutually exclusive.

One interface comes from the Hybrid Memory Cube Consortium, a nonprofit group of DRAM makers and DRAM users (that’s a large group) that collectively work on defining how hybrid memory cubes should work. The other comes from Rambus, the decidedly for-profit company that makes its business developing and licensing interface-related IP.

If you’re not familiar with hybrid memory cubes, we’ve covered them before in these pages. It’s pretty much what it sounds like: a cube of DRAMs stacked up to be very dense. The idea behind cubes is pretty sound. Most projects use multiple memory chips, and they want them to take up less space. So why not stack all the DRAM silicon together and have them share a single interface? The result is a single mega-DRAM that’s a bit taller than normal but otherwise works like any DRAM. As a nice side effect, the storage cells – the actual memory part of the DRAM – can be fabricated in one type of silicon, while the single interface chip can be built using a different silicon process. That allows you to optimize the chemistry either for storage or for logic, without having to mix both, as most DRAMs do.
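If it helps to picture the arrangement in more familiar terms, here’s a minimal sketch of the idea in Python. The layer counts, capacities, and link counts are made-up placeholders for illustration, not anything from the consortium’s spec:

```python
# Toy model of a hybrid memory cube: several DRAM storage dies stacked on top
# of a single logic die that owns the one external interface.
# All numbers here are illustrative placeholders, not spec values.

class StorageDie:
    """One DRAM layer, fabricated on a DRAM-optimized process."""
    def __init__(self, capacity_mbit):
        self.capacity_mbit = capacity_mbit


class LogicDie:
    """The single interface/controller layer, built on a logic process."""
    def __init__(self, serial_links):
        self.serial_links = serial_links


class MemoryCube:
    """A stack of storage dies sharing the logic die's one external interface."""
    def __init__(self, num_layers, capacity_per_die_mbit, serial_links):
        self.layers = [StorageDie(capacity_per_die_mbit) for _ in range(num_layers)]
        self.interface = LogicDie(serial_links)

    @property
    def total_capacity_mbit(self):
        return sum(die.capacity_mbit for die in self.layers)


# Example: four stacked storage dies behind a single shared interface
cube = MemoryCube(num_layers=4, capacity_per_die_mbit=4096, serial_links=4)
print(cube.total_capacity_mbit, "Mbit behind", cube.interface.serial_links, "links")
```

The point of the split is visible even in a toy like this: the storage layers and the interface layer are separate objects, so each can be built the way that suits it best.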

On the other hand, stacking your DRAMs to save space has an unwelcome side effect, too: you have fewer pins with which to connect your cube to the outside world. Just when the capacity goes up, the pin count goes down. Uh-oh. So the consortium has defined a narrow serial interface to get all that DRAM goodness into and out of the cube.

Two serial interfaces, actually: a short one and a really short one. The “short reach” (SR) interface is good for distances of about 8–10 inches, while the “ultra-short reach” (USR) interface covers only about 3 inches. Chips with the SR interface will show up first. In fact, Micron is expected to debut its hybrid memory cube in the second half of this year. It’ll have an SR interface and boast 160 Gbytes/sec bandwidth. USR-interface devices may come later, but nobody is saying how soon.
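As a rough sanity check on that 160-Gbytes/sec figure, here’s the kind of back-of-the-envelope arithmetic a narrow serial interface implies. The link count, lane count, and per-lane rate below are assumptions chosen for illustration; the consortium’s actual numbers may differ:

```python
# Back-of-the-envelope bandwidth for a narrow serial memory interface.
# ASSUMED figures for illustration: 4 links, 16 lanes per direction per link,
# 10 Gb/s per lane. Actual cube configurations may differ.
links = 4
lanes_per_direction = 16
gbps_per_lane = 10

per_direction_gbps = links * lanes_per_direction * gbps_per_lane  # 640 Gb/s
aggregate_gbps = per_direction_gbps * 2                           # transmit + receive
aggregate_gbytes_per_sec = aggregate_gbps / 8

print(f"~{aggregate_gbytes_per_sec:.0f} Gbytes/sec aggregate")    # ~160
```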

The only differences between the two cube interfaces are distance and power consumption. The logical protocol is the same either way, so the same memory controller can work with either, as long as it handles both voltage levels. The USR interface is intended for really small devices, such as smartphones, that need to pack a lot of memory into a small space while also conserving energy.

Energy conservation was also the impetus behind Rambus’s newest interface, which it sells under the broad “R+” branding umbrella. The interface itself doesn’t have a name; it’s simply the R+ version of the familiar DDR3 and LPDDR3 standards. The idea here is to keep the logical protocol of DDR3 (and low-power, or LPDDR3) while reducing the interface’s energy requirements. Rambus’s secret? Lower the voltage level.

Before you say, “Well, duh,” and point out that Ohm’s Law has been in effect for quite a while, Rambus is quick to emphasize that lowering the voltage (and, with it, the power requirements) of the interface isn’t as simple as it sounds. There’s some fiddly physics involved, and the subtleties of the signal conditioning and other trickery are really what you’re paying for. Rambus is happy to teach you those secrets for a nominal license fee, plus royalties on your parts.
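For a feel of why voltage is the lever worth pulling, remember that the dynamic switching power of an I/O driver scales roughly with the square of its supply voltage. A quick sketch, using a purely hypothetical reduced voltage (not a published R+ figure):

```python
# Rough illustration of why a lower interface voltage saves power:
# dynamic switching power scales roughly as P = C * V^2 * f.
# The 1.2 V figure is ILLUSTRATIVE ONLY, not a published Rambus number.

def relative_power(v_new, v_old):
    """Ratio of switching power at v_new versus v_old, same capacitance and frequency."""
    return (v_new / v_old) ** 2

v_ddr3 = 1.5          # volts, standard DDR3 I/O
v_hypothetical = 1.2  # volts, hypothetical reduced-swing interface

ratio = relative_power(v_hypothetical, v_ddr3)
print(f"roughly {(1 - ratio) * 100:.0f}% less switching power")  # ~36% less
```

The squared term is why a seemingly small voltage reduction is worth all that fiddly signal-conditioning work.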

Whereas the hybrid-memory-cube people operate as a nonprofit consortium, Rambus is an IP-licensing firm, so the two groups approach interface standards from opposite ends of the spectrum. It’s not that the consortium members are all socialist do-gooders; it’s that they all make their money elsewhere. DRAM makers Micron and Hynix, for example, are both members of the consortium, and both firms make a good chunk of money selling DRAMs. Giving away the specification, and giving away the time it took to develop it, are simply good corporate investments. Similarly, IBM and Xilinx and the other companies in the consortium’s “inner circle” of voting members all have a vested interest in seeing this particular interface flourish. A rising tide lifts all boats, etc.

Rambus, in contrast, doesn’t really have any other line of business. Its whole raison d’être is to research, design, and license interfaces, including all the detailed engineering IP necessary to make said interfaces actually work in the real world. Clearly, that has value to engineers, or Rambus wouldn’t still be in business. So it’s apples and oranges, really.

The hybrid-memory club doesn’t really see itself competing with Rambus, or with the DDR3 or DDR4 interface standards. On purely commercial grounds, I agree. But technically, they are at odds with each other. You can’t use both, so eventually you’ll have to decide which interface you prefer, and that makes them mutually exclusive competitors. Rambus doesn’t care whose memory chips you buy, so long as you connect to them via their interface. The consortium also doesn’t care whose chips you buy; they just hope they’re compatible with their members’ hybrid cube standard.

I’m no memory-interface designer – I happily gave that up a while ago – so I’m on the fence about this one. I feel like the hybrid memory cube is the better route going forward, but also the riskier one. It’s got density on its side, and a space-efficient serial interface. But it also seems like a risky leap into a strange world of stacked dice and through-silicon interconnections. If you don’t look too closely at the insides of a hybrid memory cube, it’s unremarkable. But the science that goes on inside took a lot of people a lot of time to figure out. It’s not simple stuff. And it’s too early to know how memory-cube pricing compares to that of “normal” DRAMs.

On the other hand, Rambus’s R+ update to the tried-and-true DDR3 interface is, well, tried and true. Everybody makes DDR3 chips, even if they don’t currently make the power-saving R+ versions. But at least the logic interface is the same. The R+ update affects only the physical drivers, not the protocol, so it’s easy to debug and there’s little new science involved. If you’re already a DDR3 user, this seems like the easier way to go.
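To see why that split makes debugging easier, picture the controller and the drivers as separate layers: the protocol on top never changes, and only the bottom layer gets swapped. A hypothetical sketch (the class names and voltage values are invented for illustration, not anyone’s actual controller code):

```python
# Hypothetical sketch: the DDR3 command protocol stays put while the physical
# driver (the part R+ touches) is swapped underneath it. All names and values
# here are invented for illustration.

class Ddr3Phy:
    """Standard-voltage DDR3 physical drivers."""
    io_voltage = 1.5

    def drive(self, command):
        return f"{command} driven at {self.io_voltage} V"


class LowSwingPhy(Ddr3Phy):
    """Same electrical job, lower swing -- the only layer that changes."""
    io_voltage = 1.2  # illustrative value


class Ddr3Controller:
    """Issues the same command sequence regardless of which PHY sits below."""
    def __init__(self, phy):
        self.phy = phy

    def read(self, row, col):
        # The protocol is identical; only the electrical drive differs.
        return self.phy.drive(f"ACTIVATE row {row}, READ column {col}")


print(Ddr3Controller(Ddr3Phy()).read(0x1A, 0x3))
print(Ddr3Controller(LowSwingPhy()).read(0x1A, 0x3))
```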

Until somebody makes spin-transfer MRAM or knob-and-tube memory cheap and reliable, we’ll still be fiddling with these DRAMs one way or the other.
