feature article

Monstrous Memory

Altera SiP Brings the Bandwidth

Altera quietly announced their new “Stratix 10 DRAM SiP” recently, and the headline almost slipped in under our radar. After all, we’ve been hearing about “Stratix 10” for a while now. We have written about the impressive capabilities it will bring to the table, thanks to substantial architectural advances by Altera and the promise of Intel’s 14nm Tri-Gate (Intel’s term for FinFET) technology. And, “DRAM” is about as exciting as reruns of old Super Bowls.

So, the only thing to catch much attention was the little “SiP” part tacked onto the end of the title, and even that is a concept we’ve covered in quite a bit of detail.

But, it turns out that this is the THING. This is the very convergence that we’ve speculated about for years. This is the place where new advanced 3D packaging technology meets cutting-edge FPGA architecture meets FinFET process advantages meets insane new memory standards. This is where all of those things come together to make something that’s bigger than the sum of the parts and that can deliver capabilities we previously only dreamed about. (That is, if you’re the kind of person who “dreams” about ultra-high memory bandwidth with unbelievably tiny power consumption.) 

Here’s the deal. Altera will offer a family within their upcoming Stratix 10 line that includes HBM2 DRAM from SK Hynix in the package – connected to the FPGA with Intel’s proprietary EMIB technology. The result will be up to 10x the memory bandwidth you could get by connecting your FPGA to off-chip memory. There. That’s the whole thing. Nothing else to see here.

…except for the implications, of course. 

FPGAs, particularly the new FinFET-based generations we will soon see, bring enormous processing power to the table. But processing power is useful only in combination with a proportional amount of memory bandwidth, and memory bandwidth (rather than processing power) is likely to be the real bottleneck for many high-end applications. It doesn’t matter how fast you can crunch the data if you’ve got no place to store it.
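
To make that tradeoff concrete, here’s a quick back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption rather than a Stratix 10 spec, but it shows how fast a bandwidth-hungry design outruns conventional DRAM:

    # Back-of-the-envelope compute-vs-bandwidth balance check.
    # All numbers are illustrative assumptions, not vendor specs.

    compute_gops = 1000.0  # assumed sustained throughput, billions of ops/s
    bytes_per_op = 1.0     # assumed bytes of operand traffic per operation

    required_bw_gbs  = compute_gops * bytes_per_op  # GB/s to keep the logic fed
    ddr4_channel_gbs = 25.6                         # one 64-bit DDR4-3200 channel

    print(f"Bandwidth needed: {required_bw_gbs:.0f} GB/s")
    print(f"Equivalent DDR4 channels: {required_bw_gbs / ddr4_channel_gbs:.1f}")

At those (hypothetical) numbers, you’d need the better part of forty DDR4 channels just to keep the logic fed, which is exactly the wall this SiP is built to knock down.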

What we need is a memory standard capable of the stratospheric bandwidths demanded by these next-generation chips, and DDR4 (which was released way back in 2014) is obviously getting a little long in the tooth. Luckily, a couple of other standards – Hybrid Memory Cube (HMC) and High Bandwidth Memory (HBM) – have arrived just in time. Both standards rely on advanced 3D packaging techniques, as they’re not designed for DDR-style PCB interconnect.

Of course, both Altera and Xilinx have been working on advanced packaging techniques for a long time, and Xilinx has been shipping interposer-based stacked-silicon interconnect devices for years now. Altera’s partnership with Intel has given them access to what Intel calls “Embedded Multi-die Interconnect Bridge” (EMIB) technology. Rather than a silicon interposer that spans the entire area beneath the die and relies on through-silicon vias (TSVs) to reach the package, EMIB uses smaller chip-to-chip bridges embedded into the package substrate. The die connect to the bridges through microbumps for high-density signals and through ordinary flip-chip bumps for the larger (power/ground-type) connections. That means no TSVs are required with EMIB, and that’s a pretty big deal. Further, since a bridge doesn’t have to cover the entire area under the die (as an interposer does), an EMIB-based design is not limited by the maximum size of a silicon interposer.

In the case of Stratix 10, connecting the HBM2 DRAM via EMIB accomplishes more than just a highly obfuscated wall of acronyms. EMIB offers very short connections, even when compared with an interposer. Shorter connections mean lower capacitance, higher performance, and less power. The HBM2 DRAM chips themselves are 8Gb each, and they can be stacked up to 8 high, yielding an 8GB stack with 256GB/s of bandwidth. Combine four of these stacks and you get 32GB of HBM2 DRAM with a crazy 1TB/s of aggregate memory bandwidth connected to your FPGA.
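
The arithmetic is easy to check. Here’s a quick sketch that just multiplies out the figures above (the 256GB/s per-stack number is HBM2’s headline rate):

    # Multiply out the HBM2 capacity and bandwidth figures from above.

    die_capacity_gbit = 8    # 8Gb per DRAM die
    dies_per_stack    = 8    # stacked up to 8 high
    stacks            = 4    # four stacks in the package
    bw_per_stack_gbs  = 256  # GB/s per HBM2 stack

    stack_capacity_gbyte = die_capacity_gbit * dies_per_stack / 8  # Gb -> GB
    total_capacity_gbyte = stack_capacity_gbyte * stacks
    total_bw_gbs         = bw_per_stack_gbs * stacks

    print(f"Per stack: {stack_capacity_gbyte:.0f} GB at {bw_per_stack_gbs} GB/s")
    print(f"Total: {total_capacity_gbyte:.0f} GB at {total_bw_gbs} GB/s (~1 TB/s)")

Four stacks at 256GB/s apiece is 1024GB/s, hence the (rounded) 1TB/s headline.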

The benefits don’t stop there, of course. If you were previously using multi-gigabit transceivers to connect your FPGA to, say, HMC memory, you now get all those transceivers back to use for something else – like getting data into and out of your FPGA. And you get back all the power that those transceivers were burning to push and pull all that data through your IO pins and your PCB.
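
How much power is that? Here’s a rough estimate; the energy-per-bit figure is an assumed ballpark for a PCB-crossing SerDes link, not a measured or vendor number:

    # Rough estimate of transceiver power reclaimed by moving memory
    # traffic in-package. The pJ/bit figure is an assumed ballpark for a
    # PCB-crossing SerDes link, not a vendor spec.

    memory_traffic_gbs = 256   # GB/s of traffic pulled off the transceivers
    energy_per_bit_pj  = 10.0  # assumed link + PHY energy per bit

    bits_per_second = memory_traffic_gbs * 8e9
    power_watts     = bits_per_second * energy_per_bit_pj * 1e-12

    print(f"~{power_watts:.0f} W reclaimed at {memory_traffic_gbs} GB/s")

That works out to roughly 20W under these assumptions, and even if the real figure is half that, it’s power your design gets back for free.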

So, let’s review. Insane bandwidth – check. Giant “in-package” memory – check. Dramatically lower power – check. Getting your FPGA transceivers and package pins back – check. Not having to route those memory interfaces through your board – check. This won’t be the cheapest way to add memory to your system, but if you’re doing applications like data center acceleration, radar, high-performance computing, 8K video, or high-speed networking (you know, the stuff most of us buy high-end FPGAs for in the first place), the performance you’ll be able to get with this integrated memory will be more than worth the price premium. In fact, it will probably make the difference in whether you can do your application at all.

This is the kind of thing we imagined when FPGAs started to roll out with new advanced packaging technology, and we’re excited to see more. Altera says that we’ll also see an arrangement similar to what we’ve seen from Xilinx – with transceivers fabricated on a different process mated to the FPGA in the package. In the future, we might also expect silicon photonics to be integrated in a similar manner.

This idea of separating the base FPGA die from the more transient parts of the design – memory, transceivers, and perhaps even things like processors – should give the new FPGA families more life and more versatility. It means that the FPGA companies won’t have to tape out a new FPGA just to offer a different mix of these more esoteric items, and they’ll be able to adapt rapidly to new technology in these areas without spinning a new FPGA. Given the stratospheric cost of a tape-out at 14nm, that could be a very good thing.

It will also be interesting to watch how Intel’s EMIB plays out against the interposer-based solutions most of the industry is following right now. On the surface, EMIB appears to have some distinct advantages over silicon interposers: potentially larger integration area, easier fabrication (no TSVs required), and shorter, faster connections. However, time will tell how the real strengths and weaknesses stack up.
