
3D IC Flow Challenges

Cadence Talks About the Toughest Bits

Cadence recently announced a design flow for CoWoS 3D ICs. We’ve looked at some of the issues surrounding 3D IC technology before, but what we’ve looked at less are the specific ways designers will implement 3D ICs within their EDA flows.

So I talked with Cadence’s Brandon Wang to see what specific challenges they encountered and addressed in the flow that they announced. He listed three primary elements: heat, testing, and stress.

Things are heating up

It’s no surprise that thermal concerns top the list. Ever since the first discussions of stacking ICs, one of the top questions has been, how will we dissipate the heat? Things are particularly tricky for memories, which generally have less generous temperature specs; the sense amps can be thrown off if the temperature climbs too high. Integrated memory: good. Integrated memory that doesn’t work: not so useful.

The memory situation is made worse by something else that’s supposed to be good: lots of I/Os. The whole wide-I/O thing is a real boon for bandwidth, but it also means you have lots of signals running at the same frequency and often sharing edges. While it’s not as bad as trying to drive actual package I/Os, it can still raise power and cause local heating.
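For a feel for why all those toggling pins matter, here’s a back-of-envelope switching-power estimate using the standard P = αCV²f formula. Every number below – pin count, load, swing, frequency, activity factor – is an illustrative assumption, not a figure from Cadence.

```python
# Back-of-envelope dynamic power for a wide-I/O memory interface.
# All values are illustrative assumptions, not figures from the article.
def switching_power(n_ios, c_load, v_swing, freq, activity):
    """Total P = activity * C * V^2 * f, summed over all I/Os."""
    return n_ios * activity * c_load * v_swing**2 * freq

# Assumed: 512 I/Os, 50 fF per short on-stack load, 1.2 V swing,
# 200 MHz, and a quarter of the pins toggling on any given edge.
p = switching_power(n_ios=512, c_load=50e-15, v_swing=1.2,
                    freq=200e6, activity=0.25)
print(f"Estimated interface switching power: {p * 1e3:.1f} mW")  # ~1.8 mW
```

That’s modest in absolute terms (the on-stack loads really are tiny compared with package I/Os), but it’s concentrated in a small area, often directly under a temperature-sensitive memory.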

So a designer will need to consider what’s underneath a memory chip – are there particular hot spots there? Might there be better ways to position the memory? Or would a different underlying chip floorplan improve the situation?

And of course, memories aren’t the only chips that warrant consideration; they’re just a really obvious one. An analog chip, for example, might also be impacted by heat from an underlying die.

Clearly a thermal analysis tool is in order for understanding how the heat’s going to flow. But traditional thermal tools have benefited from the fact that it can take seconds for heat in a die to migrate out through the package and leadframe. Such relatively slow dynamics have allowed what is effectively static thermal analysis.

But the distance between one die and another in a 3D IC is much smaller, and there might even be heat-dissipating materials in the gap. So the heat can move from one die to another in a mere 0.4 seconds, much less time than before. What that effectively means is that dynamic thermal analysis tools are needed. Such tools are very time-consuming to run today; Cadence says they’re working on new versions that will run much more quickly.
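To see why the dynamics matter, here’s a minimal transient heat-diffusion sketch for a two-die stack – a toy in the spirit of what a dynamic thermal tool must solve, though real tools model full 3D geometry, interface materials, and the package. All dimensions and material values here are made-up assumptions.

```python
import numpy as np

# Minimal explicit finite-difference sketch of transient heat flow through
# a two-die stack: a hot logic die below, a memory die above. Dimensions,
# material values, and boundary conditions are all made-up assumptions.
alpha = 9e-5                # thermal diffusivity of silicon, m^2/s (approx.)
dz = 20e-6                  # cell size: 20 um
n = 20                      # 10 cells per die, two dies stacked
dt = 0.4 * dz**2 / alpha    # time step within the explicit-stability limit
T = np.zeros(n)             # temperature rise above ambient, K
T[:10] = 50.0               # logic die starts 50 K hotter than the memory

t = 0.0
while T[15] < 10.0:         # watch a point in the middle of the memory die
    T[1:-1] += dt * alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
    T[0], T[-1] = T[1], T[-2]   # insulated ends: worst case, no heat sink
    t += dt

print(f"Memory die heated 10 K in ~{t * 1e3:.2f} ms")
```

Even this crude bare-silicon model moves meaningful heat across the stack in well under a millisecond – dynamics a static analysis simply never sees.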

Tests are telling tales

Testing gets more complicated, not for any specific technical reasons, but because the business model is more complicated. With standard chips, one company builds the chip and puts it in a package. (OK, a separate company somewhere cheap may do the actual assembly, but only as a subcontractor to the chipmaker.) So when things go wrong, the locus of responsibility is pretty easy to identify.

But it will be a rare 3D IC where everything in the package is made by the same company. In most cases, a company will design an SoC and pair it up with chips made by other companies – DRAMs, non-volatile memory, and perhaps an analog chip or two. Even if each component posts a 90-something-percent final-test yield on its own, the compounded yield of all of those components, along with testing fallout, can push overall yield down into the 50% range. So if yields are crummy, you’ll want to figure out whodunit.
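The compounding is just multiplication, but it’s sobering to see. A quick sketch, with made-up per-die yields and an assumed assembly-and-test fallout:

```python
# Compound-yield arithmetic for a multi-die stack. Per-die yields and the
# assembly/test fallout are made-up illustrative numbers, not Cadence data.
dies = {"SoC": 0.95, "DRAM": 0.92, "flash": 0.96, "analog": 0.97}
assembly_and_test = 0.90    # assumed fallout from stacking plus final test

stack_yield = assembly_and_test
for name, y in dies.items():
    stack_yield *= y

print(f"Stack yield: {stack_yield:.1%}")    # about 73% with these numbers
```

Shave a few points off each die, or add a taller DRAM stack, and you’re quickly staring at the 50% range.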

We’ve talked about the “test elevator” concept before; it extends the scan paths in the vertical direction, allowing segmentation of the tests and – critically – identification of whose component failed. Test software has been augmented to handle this new responsibility.
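Conceptually, the segmentation turns one opaque pass/fail into a per-die verdict. Here’s a toy illustration – the die names, patterns, and the run_segment stand-in are all hypothetical; a real test elevator is DFT hardware driven by test equipment, not Python:

```python
# Toy model of test-elevator segmentation: the vertical scan path is split
# into per-die segments so a failing response pins the blame on one die.
# Die names, patterns, and run_segment are hypothetical stand-ins.
EXPECTED = {"SoC": "1010", "DRAM": "1100", "analog": "0110"}
OBSERVED = {"SoC": "1010", "DRAM": "1000", "analog": "0110"}  # fake capture

def run_segment(die):
    """Stand-in for scanning one die's segment through the elevator."""
    return OBSERVED[die]

for die, want in EXPECTED.items():
    got = run_segment(die)
    verdict = "PASS" if got == want else f"FAIL (expected {want}, got {got})"
    print(f"{die:>6}: {verdict}")
```

Here the DRAM segment fails while the others pass – exactly the whodunit answer the business model needs.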

We’ll be talking in more detail about the economics of yield and testing in a future piece (although it will focus more on 2.5-D chips for reasons we’ll elucidate then).

Devices are stressing out

The third issue isn’t an electrical one, but it has electrical implications. 3D integration introduces new mechanical stresses into the system. Those stresses arise both intra-die, from the through-silicon vias (TSVs) that get etched through the chip, and inter-die, from the bonding of one chip over another and any mismatches in expansion and contraction as things heat up and cool down.

This can affect how the transistors behave, which should be no surprise, since stress has explicitly been used to improve transistor mobility. The problem here is that we’re introducing new stresses, and they haven’t been figured into the characterization data you rely on in your cell libraries. So simulations based on those libraries will be wrong wherever the local stresses differ from what the libraries assume.

Now… it is, in principle, possible to adjust for those stresses. That would require a tool that could figure out what those stresses are and how they’re physically distributed throughout the various dice. No such tool exists, according to Cadence, and my sense is that none is forthcoming.

So rather than relying on an analytical approach, the preferred approach will be rules-based. Most obvious are the keep-out zones around the TSVs, which ensure that transistors sit far enough from a TSV that they don’t feel its stress. Likewise, any issues with the placement of one die over another will be handled by rules.
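A rules-based check is also refreshingly easy to automate. Here’s a miniature keep-out checker; the 5 µm radius and all coordinates are invented for illustration – real keep-out values come from the foundry or assembly house’s rule deck:

```python
import math

# Miniature rules-based keep-out check: flag any transistor that sits
# inside an assumed keep-out radius around a TSV. The 5 um radius and all
# coordinates are invented; real values come from the foundry rule deck.
KEEP_OUT_UM = 5.0
tsvs = [(0.0, 0.0), (40.0, 10.0)]                     # TSV centers, um
transistors = {"M1": (3.0, 2.0), "M2": (12.0, 9.0), "M3": (42.0, 11.0)}

for name, (x, y) in transistors.items():
    for tx, ty in tsvs:
        if math.hypot(x - tx, y - ty) < KEEP_OUT_UM:
            print(f"{name} violates keep-out around TSV at ({tx}, {ty})")
```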

While perhaps intellectually less satisfying, such an approach is a pragmatic way of getting something done today rather than waiting for a more elegant future solution that would provide only nominal benefit beyond what the rules enable.

Whether dealing with thermal, testing, or stress, the memory guys are having an outsized say in the standards that are evolving to make all of this happen smoothly. That’s because the economics of memories rely on their being high-volume commodity chips (much as everyone laments being relegated to commodity status). If some guy is designing an SoC that needs a memory on top, the memory guys don’t want to have to create some custom configuration for the convenience of the underlying SoC.

In general, it’s going to be the responsibility of the SoC designer to accommodate all of the off-the-shelf chips that will complete the 3D package. Which seems reasonable. It’s like pulling standard cells out of a library – only these ones happen not to reside on the same die. And they’re not in the library; they’re in some distributor’s catalog.

This whole heterogeneous 3D IC thing (i.e., not a stack of like dice) is just getting started, so it’s reasonable to expect that the flow will be tweaked as real engineers figure out what works and what doesn’t work and relay those thoughts to their friendly neighborhood EDA dude. I’m sure we haven’t heard the last word on this.

 

More info:

Cadence’s 3D IC flow

