
Monstrous Memory

Altera SiP Brings the Bandwidth

Altera quietly announced their new “Stratix 10 DRAM SiP” recently, and the headline almost slipped under our radar. After all, we’ve been hearing about “Stratix 10” for a while now. We have written about the impressive capabilities it will bring to the table, thanks to substantial architectural advances by Altera and the promise of Intel’s 14nm Tri-Gate (Intel’s term for FinFET) technology. And, “DRAM” is about as exciting as reruns of old Super Bowls.

So, the only thing to catch much attention was the little “SiP” part tacked onto the end of the title, and even that is a concept we’ve covered in quite a bit of detail.

But, it turns out that this is the THING. This is the very convergence that we’ve speculated about for years. This is the place where new advanced 3D packaging technology meets cutting-edge FPGA architecture meets FinFET process advantages meets insane new memory standards. This is where all of those things come together to make something that’s bigger than the sum of the parts and that can deliver capabilities we previously only dreamed about. (That is, if you’re the kind of person who “dreams” about ultra-high memory bandwidth with unbelievably tiny power consumption.) 

Here’s the deal. Altera will offer a family within their upcoming Stratix 10 line that includes HBM2 DRAM from SK Hynix in the package – connected to the FPGA with Intel’s proprietary EMIB technology. The result will be up to 10x the memory bandwidth you could get by connecting your FPGA to off-chip memory over the board. There. That’s the whole thing. Nothing else to see here.

…except for the implications, of course. 

FPGAs, particularly the new FinFET-based generations we will soon see, bring enormous processing power to the table. But processing power is useful only in combination with a proportional amount of memory bandwidth, and memory bandwidth (rather than processing power) is likely to be the real bottleneck for many high-end applications. It doesn’t matter how fast you can crunch the data if you can’t move it in and out fast enough.
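
To put rough numbers on that argument, here’s a quick roofline-style sketch in Python. It is illustrative only – the peak throughput, bandwidth, and work-per-byte figures are hypothetical placeholders, not Stratix 10 specifications – but it shows how a kernel’s achievable throughput is capped by the lesser of raw compute and memory bandwidth times the work done per byte moved.

    # Roofline-style estimate: a kernel's achievable throughput is capped by
    # min(peak compute, memory bandwidth x arithmetic intensity).
    # All figures below are hypothetical placeholders, not device specs.

    def ceiling_gflops(peak_gflops, bandwidth_gb_s, flops_per_byte):
        """Throughput ceiling for a kernel doing flops_per_byte FLOPs per byte moved."""
        return min(peak_gflops, bandwidth_gb_s * flops_per_byte)

    PEAK_GFLOPS = 10_000   # hypothetical peak compute
    INTENSITY = 2          # hypothetical kernel: 2 FLOPs per byte of memory traffic

    for bw_gb_s in (25, 100, 1000):  # roughly DDR-class up to in-package HBM-class
        ceiling = ceiling_gflops(PEAK_GFLOPS, bw_gb_s, INTENSITY)
        print(f"{bw_gb_s:4d} GB/s -> {ceiling:6,.0f} GFLOP/s ceiling")

Until either the bandwidth or the work-per-byte climbs, the extra compute simply sits idle.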

What we need is a memory standard capable of the stratospheric bandwidths demanded by these next-generation chips, and DDR4 (which was released way back in 2014) is obviously getting a little long in the tooth. Luckily, a couple of other standards – Hybrid Memory Cube (HMC) and High Bandwidth Memory (HBM) – have arrived just in time. Both of the latter standards rely on advanced 3D packaging techniques, as they’re not designed for DDR-style PCB interconnect.

Of course, both Altera and Xilinx have been working on advanced packaging techniques for a long time, and Xilinx has been shipping interposer-based stacked-silicon interconnect devices for years now. Altera’s partnership with Intel has given them access to what Intel calls “Embedded Multi-die Interconnect Bridge” (EMIB) technology. Rather than a silicon interposer – a slab of silicon spanning the entire area under the die, with through-silicon vias (TSVs) carrying connections down to the package – EMIB embeds small chip-to-chip bridges directly into the package substrate. The die connect to the bridges through microbumps for high-density signals and ordinary flip-chip bumps for the larger power and ground connections, so no TSVs are required – and that’s a pretty big deal. Further, since a bridge doesn’t have to span the entire area under the die (as an interposer does), an EMIB-based design isn’t limited by the maximum size of a silicon interposer.

In the case of Stratix 10, connecting the HBM2 DRAM via EMIB accomplishes more than just a highly obfuscated wall of acronyms. EMIB offers very short connections, even when compared with an interposer. Shorter connections mean lower capacitance, higher performance, and lower power. The HBM2 DRAM die themselves are 8Gb each, and they can be stacked up to 8 high, yielding an 8GB stack with 256GB/s of bandwidth. Combine four of these stacks and you get 32GB of HBM2 DRAM with a crazy 1TB/s of aggregate memory bandwidth connected to your FPGA.
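
Those capacity and bandwidth figures fall straight out of the per-stack numbers. Here’s the back-of-the-envelope arithmetic as a quick Python check, using only the figures quoted above:

    # Back-of-the-envelope check of the HBM2 numbers quoted above.
    DIE_CAPACITY_GBIT = 8        # 8Gb per DRAM die
    DIES_PER_STACK = 8           # stacked up to 8 high
    STACK_BANDWIDTH_GB_S = 256   # bandwidth per stack
    STACKS = 4                   # four stacks in the package

    stack_capacity_gb = DIE_CAPACITY_GBIT * DIES_PER_STACK / 8   # 8 GB per stack
    total_capacity_gb = stack_capacity_gb * STACKS               # 32 GB total
    total_bandwidth_gb_s = STACK_BANDWIDTH_GB_S * STACKS         # 1,024 GB/s

    print(f"{stack_capacity_gb:.0f} GB per stack, {total_capacity_gb:.0f} GB total, "
          f"{total_bandwidth_gb_s} GB/s (~{total_bandwidth_gb_s / 1000:.0f} TB/s) aggregate")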

The benefits don’t stop there, of course. If you were previously using multi-gigabit transceivers to connect your FPGA to external memory – say, Hybrid Memory Cube – you now get all of those transceivers back to use for something else, such as getting data into and out of your FPGA. And you get back all the power those transceivers were burning pushing and pulling that data through your IO pins and across your PCB.
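
Just to get a feel for the scale of that I/O power argument, here’s a toy Python estimate. The energy-per-bit figures are assumptions picked purely for illustration – real numbers depend on the transceiver, the PHY, and the channel, and none of them come from Altera – but the shape of the math is the point.

    # Toy estimate of I/O power for moving one HBM2 stack's worth of traffic.
    # The energy-per-bit figures are hypothetical placeholders, not measured values.
    BANDWIDTH_GB_S = 256   # one stack's worth of memory traffic

    def io_power_watts(bandwidth_gb_s, picojoules_per_bit):
        bits_per_second = bandwidth_gb_s * 8e9
        return bits_per_second * picojoules_per_bit * 1e-12

    for label, pj_per_bit in (("off-package SerDes + PCB (assumed)", 10),
                              ("short in-package links (assumed)", 3)):
        print(f"{label}: {io_power_watts(BANDWIDTH_GB_S, pj_per_bit):.1f} W")

Swap in whatever per-bit figures you trust; it’s the ratio, not the absolute wattage, that the in-package approach buys you.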

So, let’s review. Insane bandwidth – check. Giant “in-package” memory – check. Dramatically lower power – check. Getting your FPGA transceivers and package pins back – check. Not having to route those memory interfaces through your board – check. This won’t be the cheapest way to add memory to your system, but if you are doing applications like data center acceleration, radar, high-performance computing, 8K video, or high-speed networking (you know, the stuff most of us buy high-end FPGAs for in the first place), the performance you’ll be able to get with this integrated memory will be more than worth the price premium. In fact, it will probably make the difference in whether you can do your application at all.

This is the kind of thing we imagined when FPGAs started to roll out with new advanced packaging technology, and we’re excited to see more. Altera says that we’ll also see an arrangement similar to what we’ve seen from Xilinx – transceivers fabricated on a different process, mated to the FPGA die in the package. In the future, we might also expect silicon photonics to be integrated in a similar manner.

This idea of separating the base FPGA die from the more transient parts of the design – memory, transceivers, and perhaps even things like processors – should give the new FPGA families more life and more versatility. It means that the FPGA companies won’t have to tape out a new FPGA just to offer a different mix of these more esoteric items, and they’ll be able to adapt rapidly as the technology in those areas evolves. Given the stratospheric cost of a 14nm tapeout, that could be a very good thing.

It will also be interesting to watch how Intel’s EMIB plays out against the interposer-based solutions most of the industry is following right now. On the surface, EMIB appears to have some distinct advantages over silicon interposers: potentially larger integration area, easier fabrication (no TSVs required), and shorter, faster connections. However, time will tell how the real strengths and weaknesses stack up.
