
Day of the DRAM

New DRAM Interfaces Keep Memory Buses Humming

A pair of new DRAM interfaces broke cover recently, and both promise to make engineers’ lives tougher – no, wait, easier! Sorry. Easier because the new interfaces make memory faster and more power-efficient (both good things), but tougher because it’ll be harder to decide which one you want. And they’re definitely mutually exclusive.

One interface comes from the Hybrid Memory Cube Consortium, a nonprofit group of DRAM makers and DRAM users (that's a large group) whose members collectively define how hybrid memory cubes should work. The other comes from Rambus, the decidedly for-profit company that makes its business developing and licensing interface-related IP.

If you’re not familiar with hybrid memory cubes, we’ve covered them before in these pages. It’s pretty much what it sounds like: a cube of DRAMs stacked up to be very dense. The idea behind cubes is pretty sound. Most projects use multiple memory chips, and they want them to take up less space. So why not stack all the DRAM silicon together and have them share a single interface? The result is a single mega-DRAM that’s a bit taller than normal but otherwise works like any DRAM. As a nice side effect, the storage cells – the actual memory part of the DRAM – can be fabricated in one type of silicon, while the single interface chip can be built using a different silicon process. That allows you to optimize the chemistry either for storage or for logic, without having to mix both, as most DRAMs do.

On the other hand, stacking your DRAMs to save space has an unwelcome side effect, too: you have fewer pins with which to connect your cube to the outside world. Just when the capacity goes up, the pin count goes down. Uh-oh. So the consortium has defined a narrow serial interface to get all that DRAM goodness into and out of the cube.

Two serial interfaces, actually: a short one and a really short one. The “short reach” (SR) interface is good for distances of about 8–10 inches, while the “ultra-short reach” (USR) interface covers only about 3 inches. Chips with the SR interface will show up first. In fact, Micron is expected to debut its hybrid memory cube in the second half of this year. It’ll have an SR interface and boast 160 Gbytes/sec bandwidth. USR-interface devices may come later, but nobody is saying how soon.
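That 160-Gbyte/sec figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes the configuration described in the consortium's first-generation spec (four full-duplex links, 16 lanes per direction, 10 Gbits/sec per lane, with both directions counted in the aggregate); Micron hasn't confirmed these numbers for its specific part.

```python
# Back-of-the-envelope check on the cube's headline bandwidth.
# Assumed configuration (per the first-generation consortium spec,
# not confirmed for any specific part): 4 links, 16 lanes per
# direction, 10 Gb/s per lane, aggregate counts both directions.

links = 4
lanes_per_direction = 16
directions = 2               # full duplex: transmit + receive
gbits_per_lane = 10          # Gb/s per serial lane

total_gbits = links * lanes_per_direction * directions * gbits_per_lane
total_gbytes = total_gbits / 8   # 8 bits per byte

print(f"Aggregate bandwidth: {total_gbytes:.0f} GB/s")  # 160 GB/s
```

If those assumptions hold, the math works out exactly: 1,280 Gbits/sec across all lanes, or 160 Gbytes/sec in aggregate.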

The only differences between the two cube interfaces are distance and power consumption. The logical protocol is the same either way, so the same memory controller can work with either, as long as it handles both voltage levels. The USR interface is intended for really small devices, such as smartphones, that need to pack a lot of memory into a small space while also conserving energy.

Energy conservation was also the impetus behind Rambus’s newest interface, which it sells under the broad “R+” branding umbrella. The interface itself doesn’t have a name; it’s simply the R+ version of the familiar DDR3 and LPDDR3 standards. The idea here is to keep the logical protocol of DDR3 (and low-power, or LPDDR3) while reducing the interface’s energy requirements. Rambus’s secret? Lower the voltage level.

Before you say, “Well, duh,” and point out that Ohm’s Law has been in effect for quite a while, Rambus is quick to emphasize that simply lowering the voltage (and thus the power requirements) of the interface isn’t as simple as it sounds. There’s some fiddly physics involved, and the subtleties of the signal conditioning and other trickery are really what you’re paying for. Rambus is happy to teach you those secrets for a nominal license fee, plus royalties on your parts.
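The payoff from a lower swing is easy to quantify even if the engineering behind it isn't: dynamic power in a CMOS driver scales with the square of the voltage (P ≈ C·V²·f), so a modest voltage drop buys a disproportionate power savings. The voltage values in this sketch are illustrative round numbers, not Rambus's published figures.

```python
# Dynamic power scales as C * V^2 * f, so the power ratio between two
# voltage levels is simply (V_new / V_old) ** 2.  These voltages are
# illustrative examples, not Rambus's actual numbers.

v_old = 1.5   # volts: a typical DDR3 I/O rail
v_new = 1.2   # volts: a hypothetical reduced swing

power_ratio = (v_new / v_old) ** 2
print(f"Relative dynamic power: {power_ratio:.2f}")   # 0.64
print(f"Savings: {100 * (1 - power_ratio):.0f}%")     # 36%
```

In other words, a 20% voltage cut yields roughly a 36% reduction in the interface's dynamic power, which is why shaving the rail is such an attractive target in the first place.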

Whereas the hybrid-memory-cube people operate as a nonprofit consortium, Rambus is an IP-licensing firm, so the two groups approach interface standards from opposite ends of the spectrum. It’s not that the consortium members are all socialist do-gooders; it’s that they all make their money elsewhere. DRAM makers Micron and Hynix, for example, are both members of the consortium, and both firms make a good chunk of money selling DRAMs. Giving away the specification, and giving away the time it took to develop it, are simply good corporate investments. Similarly, IBM and Xilinx and the other companies in the consortium’s “inner circle” of voting members all have a vested interest in seeing this particular interface flourish. A rising tide lifts all boats, etc.

Rambus, in contrast, doesn’t really have any other line of business. Its whole raison d’être is to research, design, and license interfaces, including all the detailed engineering IP necessary to make said interfaces actually work in the real world. Clearly, that has value to engineers, or Rambus wouldn’t still be in business. So it’s apples and oranges, really.

The hybrid-memory club doesn’t really see itself competing with Rambus, or with the DDR3 or DDR4 interface standards. On purely commercial grounds, I agree. But technically, they are at odds with each other. You can’t use both, so eventually you’ll have to decide which interface you prefer, and that makes them mutually exclusive competitors. Rambus doesn’t care whose memory chips you buy, so long as you connect to them via their interface. The consortium also doesn’t care whose chips you buy; they just hope they’re compatible with their members’ hybrid cube standard.

I’m no memory-interface designer – I happily gave that up a while ago – so I’m on the fence about this one. I feel like the hybrid memory cube is the better route going forward, but also the riskier one. It’s got density on its side, and a space-efficient serial interface. But it also seems like a risky leap into a strange world of stacked dice and through-silicon interconnections. If you don’t look too closely at the insides of a hybrid memory cube, it’s unremarkable. But the science that goes on inside took a lot of people a lot of time to figure out. It’s not simple stuff. And it’s too early to know how memory-cube pricing compares to that of “normal” DRAMs.

On the other hand, Rambus’s R+ update to the tried-and-true DDR3 interface is, well, tried and true. Everybody makes DDR3 chips, even if they don’t currently make the power-saving R+ versions. But at least the logic interface is the same. The R+ update affects only the physical drivers, not the protocol, so it’s easy to debug and there’s little new science involved. If you’re already a DDR3 user, this seems like the easier way to go.

Until somebody makes a spin-transfer, MRAM, or knob-and-tube memory cheap and reliable, we’ll still be fiddling with these DRAMs one way or the other. 
