feature article

Monstrous Memory

Altera SiP Brings the Bandwidth

Altera quietly announced their new “Stratix 10 DRAM SiP” recently, and the headline almost slipped in below our radar. After all, we’ve been hearing about “Stratix 10” for a while now. We have written about the impressive capabilities it will bring to the table, thanks to substantial architectural advances by Altera and the promise of Intel’s 14nm Tri-Gate (Intel’s term for FinFET) technology. And, “DRAM” is about as exciting as reruns of old Super Bowls. 

So, the only thing to catch much attention was the little “SiP” part tacked onto the end of the title, and even that is a concept we’ve covered in quite a bit of detail.

But, it turns out that this is the THING. This is the very convergence that we’ve speculated about for years. This is the place where new advanced 3D packaging technology meets cutting-edge FPGA architecture meets FinFET process advantages meets insane new memory standards. This is where all of those things come together to make something that’s bigger than the sum of the parts and that can deliver capabilities we previously only dreamed about. (That is, if you’re the kind of person who “dreams” about ultra-high memory bandwidth with unbelievably tiny power consumption.) 

Here’s the deal. Altera will have a family of their upcoming Stratix 10 line that will include HBM DRAM from SK Hynix in the package – connected to the FPGA with Intel’s proprietary EMIB technology. The result will be up to 10x the memory bandwidth you could get by connecting your FPGA to off-chip memory. There. That’s the whole thing. Nothing else to see here.

…except for the implications, of course. 

FPGAs, particularly the new FinFET-based generations we will soon see, bring enormous processing power to the table. But processing power is useful only in combination with a proportional amount of memory bandwidth, and memory bandwidth (rather than processing power) is likely to be the real bottleneck for many high-end applications. It doesn’t matter how fast you can crunch the data if you’ve got no place to store it. 

What we need is a memory standard capable of the stratospheric bandwidths demanded by these next-generation chips, and DDR4 (which was released way back in 2014) is obviously getting a little long in the tooth. Luckily, a couple of other standards – Hybrid Memory Cube (HMC) and High Bandwidth Memory (HBM) – have arrived just in time. Both of these newer standards rely on advanced 3D packaging techniques, as they’re not designed for DDR-style PCB interconnect.
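To put some rough numbers on that bandwidth gap, here is a back-of-the-envelope comparison of a single DDR4 channel against a single HBM2 stack. The figures are nominal spec values (a 64-bit DDR4-3200 channel, a 1024-bit HBM2 stack at 2Gbps per pin), not Altera-published numbers, so treat this as an illustrative sketch rather than a datasheet.

```python
# Back-of-the-envelope peak-bandwidth comparison: DDR4 channel vs. HBM2 stack.
# Inputs are nominal JEDEC-style spec values, not vendor measurements.

def peak_bandwidth_gbps(bus_width_bits, transfer_rate_gtps):
    """Peak bandwidth in GB/s: bus width (bits) * transfer rate (GT/s) / 8 bits-per-byte."""
    return bus_width_bits * transfer_rate_gtps / 8

ddr4 = peak_bandwidth_gbps(64, 3.2)    # one 64-bit DDR4-3200 channel
hbm2 = peak_bandwidth_gbps(1024, 2.0)  # one 1024-bit HBM2 stack at 2 Gbps/pin

print(f"DDR4-3200 channel: {ddr4:.1f} GB/s")  # 25.6 GB/s
print(f"HBM2 stack:        {hbm2:.1f} GB/s")  # 256.0 GB/s
print(f"Ratio: {hbm2 / ddr4:.0f}x")           # 10x
```

That 10x ratio is exactly the kind of gap the wide, short in-package interface is designed to close.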

Of course, both Altera and Xilinx have been working on advanced packaging techniques for a long time, and Xilinx has been shipping interposer-based stacked-silicon interconnect devices for years now. Altera’s partnership with Intel has given them access to what Intel calls “Embedded Multi-die Interconnect Bridge” (EMIB) technology. Instead of a silicon interposer forming the entire substrate onto which die are attached using through-silicon vias (TSVs), EMIB uses smaller chip-to-chip bridges embedded into the package substrate. EMIB connects to the die via microbumps for high-density signals and flip-chip bumps for larger (power/ground-type) connections, so no TSVs are required – and that’s a pretty big deal. Further, since an EMIB bridge doesn’t have to cover the entire area under the die (as an interposer does), an EMIB-based design is not limited to the size of a silicon interposer.

In the case of Stratix 10, connecting the HBM2 DRAM via EMIB accomplishes more than just a highly obfuscated wall of acronyms. EMIB offers very short connections, even when compared with an interposer. Shorter connections mean lower capacitance, higher performance, and lower power. The HBM2 DRAM dies themselves are 8Gb each, and they can be stacked up to 8 high, yielding an 8GB stack with 256GB/s of bandwidth. Combine four of these stacks and you get 32GB of HBM2 DRAM with a crazy 1TB/s of aggregate memory bandwidth connected to your FPGA. 
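The capacity and bandwidth figures above fall straight out of the stack arithmetic. A quick sanity check, assuming 8Gb dies, 8-high stacks, and the nominal 256GB/s per HBM2 stack quoted above:

```python
# Sanity-check the in-package memory figures: 8Gb dies, 8-high stacks,
# four stacks, 256 GB/s per stack (nominal HBM2 figure from the article).

DIE_GBIT = 8          # capacity per DRAM die, in gigabits
DIES_PER_STACK = 8    # 8-high stack
STACKS = 4
STACK_BW_GBPS = 256   # peak bandwidth per HBM2 stack, GB/s

stack_gb = DIE_GBIT * DIES_PER_STACK / 8  # gigabits -> gigabytes: 8 GB per stack
total_gb = stack_gb * STACKS              # 32 GB in the package
total_bw = STACK_BW_GBPS * STACKS         # 1024 GB/s, i.e. ~1 TB/s aggregate

print(f"{total_gb:.0f} GB of HBM2 at {total_bw} GB/s aggregate")  # 32 GB at 1024 GB/s
```

So “32GB at 1TB/s” isn’t marketing rounding so much as four stacks doing exactly what one stack does, times four.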

The benefits don’t stop there, of course. If you were previously using the multi-gigabit transceivers to connect your FPGA to – say, Hybrid Memory Cube (HMC) memory – you now get all those transceivers back to use for something else such as, say, getting data into and out of your FPGA. And, you get back all the power that those transceivers were using to push and pull all that data through those IO pins and your PCB.

So, let’s review. Insane bandwidth – check. Giant “in-package” memory – check. Dramatically lower power – check. Getting your FPGA transceivers and package pins back – check. Not having to route those memory interfaces through your board – check. This won’t be the cheapest way to add memory to your system, but if you are doing applications like data center acceleration, radar, high-performance computing, 8K video, high-speed networking (you know, the stuff most of us buy high-end FPGAs for in the first place), the performance you’ll be able to get with this integrated memory will be more than worth the price premium. In fact, it will probably be the difference in being able to do your application at all. 

This is the kind of thing we imagined when FPGAs started to roll out with new advanced packaging technology, and we’re excited to see more. Altera says that we’ll also see an arrangement similar to what we’ve seen from Xilinx – with transceivers fabricated with a different process mated to FPGAs. In the future, we might also expect silicon photonics to be integrated in a similar manner. 

This idea of separating the base FPGA die from the more transient parts of the design – memory, transceivers, and perhaps even things like processors – should give the new FPGA families more life and more versatility. It means that the FPGA companies won’t have to tape out a new FPGA just to offer a different mix of these more esoteric items, and they’ll be able to rapidly adapt to new technology in these areas without spinning a new FPGA. Given the stratospheric cost of a tapeout at 14nm, that could be a very good thing. 

It will also be interesting to watch how Intel’s EMIB plays out against the interposer-based solutions most of the industry is following right now. On the surface, EMIB appears to have some distinct advantages over silicon interposers: potentially larger integration area, easier fabrication (no TSVs required), and shorter, faster connections. However, time will tell how the real strengths and weaknesses stack up.

