
Monstrous Memory

Altera SiP Brings the Bandwidth

Altera quietly announced their new “Stratix 10 DRAM SiP” recently, and the headline almost slipped in under our radar. After all, we’ve been hearing about “Stratix 10” for a while now. We have written about the impressive capabilities it will bring to the table, thanks to substantial architectural advances by Altera and the promise of Intel’s 14nm Tri-Gate (Intel’s term for FinFET) technology. And, “DRAM” is about as exciting as reruns of old Super Bowls.

So, the only thing to catch much attention was the little “SiP” part tacked onto the end of the title, and even that is a concept we’ve covered in quite a bit of detail.

But, it turns out that this is the THING. This is the very convergence that we’ve speculated about for years. This is the place where new advanced 3D packaging technology meets cutting-edge FPGA architecture meets FinFET process advantages meets insane new memory standards. This is where all of those things come together to make something that’s bigger than the sum of the parts and that can deliver capabilities we previously only dreamed about. (That is, if you’re the kind of person who “dreams” about ultra-high memory bandwidth with unbelievably tiny power consumption.) 

Here’s the deal. Altera’s upcoming Stratix 10 line will include a family of devices with HBM DRAM from SK Hynix in the package, connected to the FPGA with Intel’s proprietary EMIB technology. The result will be up to 10x the memory bandwidth you could get by connecting your FPGA to off-chip memory. There. That’s the whole thing. Nothing else to see here.

…except for the implications, of course. 

FPGAs, particularly the new FinFET generations we will soon see, bring enormous processing power to the table. But processing power is useful only in combination with a proportional amount of memory bandwidth, and memory bandwidth (rather than processing power) is likely to be the real bottleneck for many high-end applications. It doesn’t matter how fast you can crunch the data if you can’t move it on and off the chip fast enough.
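
To see why, consider a quick roofline-style sketch (in Python). The peak-throughput and arithmetic-intensity numbers below are purely illustrative assumptions, not Stratix 10 figures; the point is simply that once the fabric is fast enough, the memory interface sets the ceiling.

    def attainable_throughput(peak_ops_per_s, bandwidth_bytes_per_s, ops_per_byte):
        # Classic roofline reasoning: you sustain the lesser of the compute
        # ceiling and what the memory system can actually feed.
        return min(peak_ops_per_s, bandwidth_bytes_per_s * ops_per_byte)

    peak = 5e12        # assume 5 tera-ops/s of raw fabric throughput (illustrative)
    intensity = 10     # assume 10 operations per byte fetched from DRAM (illustrative)

    for name, bw in [("off-chip DDR4, ~25 GB/s", 25e9),
                     ("in-package HBM2, ~1 TB/s", 1e12)]:
        t = attainable_throughput(peak, bw, intensity)
        print(f"{name}: {t / peak:.0%} of peak compute sustained")

With those assumptions, the DDR4 case sustains only a few percent of the available compute, while the in-package memory lets the fabric run flat out.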

What we need is a memory standard capable of the stratospheric bandwidths demanded by these next-generation chips, and DDR4 (which was released way back in 2014) is obviously getting a little long in the tooth. Luckily, a couple of other standards, Hybrid Memory Cube (HMC) and High Bandwidth Memory (HBM), have arrived just in time. Both of these standards rely on advanced 3D packaging techniques, as they’re not designed for DDR-style PCB interconnect.
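
For a rough sense of scale, here is the peak-bandwidth arithmetic for one conventional DDR4 channel versus one HBM2 stack, using generic specification numbers rather than any particular device’s figures; the ratio is roughly where that 10x claim comes from.

    # Rough peak-bandwidth arithmetic for a single interface of each type.
    # Generic per-specification numbers, not Altera device figures.

    ddr4_mtps = 3200                # DDR4-3200 transfer rate in MT/s
    ddr4_bytes = 8                  # standard 64-bit channel
    ddr4_gbs = ddr4_mtps * 1e6 * ddr4_bytes / 1e9     # ~25.6 GB/s

    hbm2_pin_gbps = 2.0             # 2 Gb/s per pin
    hbm2_width = 1024               # 1024-bit interface per stack
    hbm2_gbs = hbm2_pin_gbps * hbm2_width / 8         # 256 GB/s per stack

    print(f"DDR4-3200 channel: ~{ddr4_gbs:.1f} GB/s")
    print(f"HBM2 stack:        ~{hbm2_gbs:.0f} GB/s ({hbm2_gbs / ddr4_gbs:.0f}x)")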

Of course, both Altera and Xilinx have been working on advanced packaging techniques for a long time, and Xilinx has been shipping interposer-based stacked-silicon interconnect devices for years now. Altera’s partnership with Intel has given them access to what Intel calls “Embedded Multi-die Interconnect Bridge” (EMIB) technology. Rather than a full silicon interposer forming the entire substrate under the die, with through-silicon vias (TSVs) carrying signals down to the package, EMIB uses smaller chip-to-chip bridges embedded in the package substrate. The bridges connect to the die through microbumps for high-density signals, while larger (power/ground-type) connections use ordinary flip-chip bumps straight to the package. This means that TSVs are not required for EMIB, and that’s a pretty big deal. Further, since a bridge doesn’t have to cover the entire area under the die (as an interposer does), an EMIB-based design is not limited by the maximum manufacturable size of a silicon interposer.

In the case of Stratix 10, connecting the HBM2 DRAM via EMIB accomplishes more than just a highly obfuscated wall of acronyms. EMIB offers very short connections, even when compared with an interposer. Shorter connections mean lower capacitance, higher performance, and less power. The HBM2 DRAM dice themselves are 8Gb each, and they can be stacked up to 8 high, yielding an 8GB stack with 256GB/s of bandwidth. Combine four of these stacks and you get 32GB of HBM2 DRAM with a crazy 1TB/s of aggregate memory bandwidth connected to your FPGA.
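
As a quick sanity check, the capacity and aggregate-bandwidth arithmetic behind those numbers works out like this (a simple sketch using the figures quoted above):

    # The arithmetic behind the capacity and bandwidth figures quoted above.
    die_gbit = 8                    # 8 Gb per HBM2 DRAM die
    dies_per_stack = 8              # stacked up to 8 high
    stack_gbytes = die_gbit * dies_per_stack / 8      # 8 GB per stack
    stack_bw_gbs = 256              # per-stack bandwidth (GB/s)

    stacks = 4
    total_gbytes = stacks * stack_gbytes              # 32 GB in the package
    total_bw_gbs = stacks * stack_bw_gbs              # ~1 TB/s aggregate

    print(f"{stacks} stacks: {total_gbytes:.0f} GB at {total_bw_gbs} GB/s (~1 TB/s)")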

The benefits don’t stop there, of course. If you were previously using multi-gigabit transceivers to connect your FPGA to, say, Hybrid Memory Cube (HMC) memory, you now get all of those transceivers back to use for something else, such as getting data into and out of your FPGA. And, you get back all the power that those transceivers were using to push and pull all that data through the IO pins and your PCB.

So, let’s review. Insane bandwidth – check. Giant “in-package” memory – check. Dramatically lower power – check. Getting your FPGA transceivers and package pins back – check. Not having to route those memory interfaces through your board – check. This won’t be the cheapest way to add memory to your system, but if you are building applications like data center acceleration, radar, high-performance computing, 8K video, or high-speed networking (you know, the stuff most of us buy high-end FPGAs for in the first place), the performance you’ll be able to get with this integrated memory will be more than worth the price premium. In fact, it will probably make the difference in whether you can do your application at all.

This is the kind of thing we imagined when FPGAs started to roll out with new advanced packaging technology, and we’re excited to see more. Altera says that we’ll also see an arrangement similar to what we’ve seen from Xilinx – with transceivers fabricated with a different process mated to FPGAs. In the future, we might also expect silicon photonics to be integrated in a similar manner. 

This idea of separating the base FPGA die from the more transient parts of the design (memory, transceivers, and perhaps even things like processors) should give the new FPGA families more life and more versatility. It means that the FPGA companies won’t have to tape out a new FPGA just to offer a different mix of these more esoteric items, and they’ll be able to adapt rapidly to new technology in these areas without spinning a new FPGA. Given the stratospheric cost of a tapeout at 14nm, that could be a very good thing.

It will also be interesting to watch how Intel’s EMIB plays out against the interposer-based solutions most of the industry is following right now. On the surface, EMIB appears to have some distinct advantages over silicon interposers: potentially larger integration area, easier fabrication (no TSVs required), and shorter, faster connections. However, time will tell how the real strengths and weaknesses stack up.
