
3D’s Supporting Players

3D gets lots of attention these days. Whether it’s the massive success of a movie that spawns a gaggle of followers making every possible consumer item 3D, the added dimension of the fin on a FinFET (or tri-gate) transistor, or the stacking of chips using TSVs or other technology, you just can’t seem to go wrong with 3D.

When it comes to stacking dice, it’s the TSVs or the silicon interposers that tend to take center stage. Not surprising, since they’re new and interesting technology. And we’ve already taken a look at Mentor’s and Cadence’s approaches to DFT with TSVs. But what about the rest of the design process? Does everyone else proceed as usual while the 3D star preens and acts out?

The answer to that is actually yes and no. Yes, things do need to change. And they will need to continue to change for a while. But, in principle, life pretty much goes on.

Let’s unpack that.

Stacking dice introduces a fundamental change: the cohabiting chips can no longer necessarily be designed in isolation. The ability of one IC to talk to a separate IC (in a separate package) has been the domain of standards bodies ever since there were ICs. Because of the I/O standards, each die can be designed and packaged any way a company wants as long as it remains faithful to that physical and electrical interface.

That’s no longer the case. If one die is going to have micro-bumps, then there had better be some landing pads on the die with which it will mate. And that’s just the start. A signal that begins on one die, traverses a TSV and micro-bump, and continues on into another die used to be considered two separate signals, independently characterized for performance by the respective chip designers, and connected via a board trace (or perhaps a bonding wire) by somebody else (a packaging or board guy). With 3D, more than ever, it’s a single extended trace. But conventional IC design tools historically stopped where the IC stops.

For the time being, much of this is mitigated by the use of silicon interposers for a 2½-D solution. Using the interposer reduces some of the risk that accompanies the promise of full chip-on-chip action. Each IC can still be designed separately, and the interposer can then help the chips meet in the middle. With as many as four routing layers, an interposer can accommodate about 10,000 connections – roughly an order of magnitude more than what’s typically needed, according to Synopsys’s Steve Smith – providing plenty of flexibility.

Full 3D chips add a layer or two on the backside of the die – a so-called redistribution layer, or RDL – to make the TSV placement somewhat independent of the micro-bump positions. Generating these layers requires no new technology. But with thousands of signals to route and only a couple of layers for routing, you can’t assume that any arbitrary TSV and bump placements will work; you may have to swap things around to reduce congestion.
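To make the congestion problem concrete, here’s a rough sketch of the kind of juggling involved: pairing each TSV with a nearby micro-bump so that total backside (RDL) routing length – a crude proxy for congestion – stays low. The coordinates and the greedy heuristic are purely illustrative assumptions, not how any particular tool does it.

```python
# Minimal sketch: pair each TSV with a micro-bump so that total RDL wire
# length stays low. Coordinates and the greedy heuristic are illustrative only.
from math import hypot

tsvs  = [(10, 10), (10, 50), (60, 20), (80, 70)]   # hypothetical TSV (x, y) in um
bumps = [(12, 48), (58, 22), (78, 72), (11, 12)]   # hypothetical micro-bump (x, y) in um

def greedy_assign(tsvs, bumps):
    """Pair every TSV with its nearest still-unused bump (greedy heuristic)."""
    free = set(range(len(bumps)))
    pairs = []
    for t in tsvs:
        j = min(free, key=lambda b: hypot(t[0] - bumps[b][0], t[1] - bumps[b][1]))
        free.remove(j)
        pairs.append((t, bumps[j]))
    return pairs

for tsv, bump in greedy_assign(tsvs, bumps):
    length = hypot(tsv[0] - bump[0], tsv[1] - bump[1])
    print(f"TSV {tsv} -> bump {bump}, RDL length ~ {length:.1f} um")
```

A real router would also weigh layer usage and crossing counts, but the swap-things-around flavor is the same.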

With respect to circuit layout, the mechanical stress that TSVs impose on the surrounding silicon can alter the behavior of nearby circuitry, so you need “keep-out” zones around each one. New layout rules in the DRC deck enforce that, and they’re largely taken care of by the PDK or the technology gurus, so the average designer won’t have to do much differently.
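As an illustration of what such a rule boils down to geometrically, here’s a toy keep-out check. The radius and coordinates are made-up placeholders; the real values live in the foundry’s DRC deck and PDK.

```python
# Minimal sketch of a keep-out check: flag any device placed closer to a TSV
# center than an assumed keep-out radius. Not a real PDK rule or value.
from math import hypot

KEEP_OUT_UM = 5.0                                  # assumed keep-out radius

tsv_centers = [(100.0, 100.0), (140.0, 100.0)]     # hypothetical TSV centers (um)
devices = {"M1": (103.0, 101.0), "M2": (120.0, 100.0), "M3": (138.0, 104.0)}

violations = [
    (name, tsv)
    for name, (dx, dy) in devices.items()
    for tsv in tsv_centers
    if hypot(dx - tsv[0], dy - tsv[1]) < KEEP_OUT_UM
]
for name, tsv in violations:
    print(f"{name} violates the keep-out zone around the TSV at {tsv}")
```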

For performance analysis, you have new physical interconnect elements – the TSV and micro-bumps – to model for their performance impact. You also need models of the continuation of the signal path on the other die.

According to Synopsys’s Marco Casale-Rossi, the sophistication of TSV models continues to evolve. They started out as capacitors only and now consist of a single RC element; more elaborate modeling and extraction will come. At some point, when TSVs get smaller and are packed more densely, TSV-to-TSV crosstalk will also have to be considered.
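To see what a single-RC TSV model buys you, here’s a back-of-the-envelope Elmore-style delay estimate for a signal crossing from one die to another. All of the R and C values are placeholders, not characterized numbers.

```python
# Minimal sketch: a lumped single-RC TSV model folded into an Elmore-style
# delay estimate for a cross-die signal. All values are assumptions.
R_DRV  = 1000.0      # driver output resistance (ohm), assumed
R_TSV  = 0.05        # TSV resistance (ohm), assumed
C_TSV  = 40e-15      # TSV capacitance (F), assumed
C_BUMP = 15e-15      # micro-bump capacitance (F), assumed
C_LOAD = 20e-15      # receiver input capacitance on the other die (F), assumed

# Elmore delay: each resistance sees all of the capacitance downstream of it.
delay = R_DRV * (C_TSV + C_BUMP + C_LOAD) + R_TSV * (C_BUMP + C_LOAD)
print(f"estimated cross-die delay ~ {delay * 1e12:.1f} ps")
```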

The technology for including models of more than one die has been around for a long time, used by packaging and systems engineers to handle the interconnected performance of multiple components. So, for most designers, these changes will be barely noticeable.

Things get a bit trickier with the power grid. The power for the “inner dice” – that is, the ones that don’t have any direct connections to the outside world – has to be delivered via the chip that does get external access. That power is then redistributed through TSVs and the interposer. So the power grid has to be analyzed as a whole.
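Here’s a simplistic sketch of why the grid has to be analyzed together: the outer die’s grid carries the inner die’s current as well, and the TSV array adds its own drop. All of the resistances and currents below are assumed values; a real analysis means full nodal analysis of both grids at once.

```python
# Minimal sketch: IR drop seen by an "inner" die whose supply current is
# delivered through the outer die and an array of power TSVs. Placeholders only.
VDD     = 1.0        # supply at the package ball (V), assumed
R_OUTER = 0.020      # effective outer-die grid resistance (ohm), assumed
R_TSV   = 0.050      # resistance of one power TSV (ohm), assumed
N_TSV   = 200        # power TSVs in parallel, assumed
I_OUTER = 2.0        # current drawn by the outer die (A), assumed
I_INNER = 1.5        # current drawn by the inner die (A), assumed

# The outer grid carries both dice's current; the TSV array carries only the
# inner die's current.
v_outer = VDD - (I_OUTER + I_INNER) * R_OUTER
v_inner = v_outer - I_INNER * (R_TSV / N_TSV)
print(f"outer die rail ~ {v_outer:.3f} V, inner die rail ~ {v_inner:.3f} V")
```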

Thermal issues are also a big concern, so a complete thermal analysis is important to ensure that all the heat generated internally can find its way out. But this is largely the same problem that multi-chip packages already face; the difference is that, with thinner dice, you may have to be more aggressive about how you dissipate the heat. The tools for figuring that out already exist, though.
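For a feel for the numbers, here’s a crude one-dimensional thermal-resistance stack-up for two dice sharing a heat path. Every value is an assumption for illustration only; real analysis is a full 3D thermal simulation.

```python
# Minimal sketch: 1-D thermal-resistance stack-up for two stacked dice.
# All thermal resistances and power numbers are illustrative assumptions.
T_AMBIENT = 45.0            # deg C, assumed
R_JC      = 0.8             # junction-to-case thermal resistance (C/W), assumed
R_CS      = 0.3             # case-to-heatsink (C/W), assumed
R_SA      = 2.0             # heatsink-to-ambient (C/W), assumed
R_DIE2DIE = 0.5             # extra resistance to the die farther from the heat path (C/W), assumed

P_NEAR, P_FAR = 5.0, 3.0    # W dissipated by each die, assumed

# Heat from both dice flows through the common path to ambient; the die
# farther from that path also sees the die-to-die resistance.
t_near = T_AMBIENT + (P_NEAR + P_FAR) * (R_JC + R_CS + R_SA)
t_far  = t_near + P_FAR * R_DIE2DIE
print(f"near die ~ {t_near:.1f} C, far die ~ {t_far:.1f} C")
```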

Attacking the stack

There are two chip stack design scenarios to consider. In the first, all of the dice in the stack are designed at the same time by collaborating teams. In the second, at least one die – say, a memory – is already designed and is being integrated with some new dice.

The difference is that, in the first case, all of the chip design information is in play at the same time, usually within the same company, so layers and dimensions can be shared easily. In the second case, some other company has already designed and started manufacturing the die, and it must provide customers with enough data to integrate that die with their own.

The most common component being integrated today is memory, and JEDEC has been working on its Wide I/O standard, which establishes a common configuration for the connections to a stack of memories (or “memory cube”). Mr. Smith sees such interconnect standards and interposers dominating for the next 4-5 years. Mentor’s Michael White sees full “heterogeneous” stacks being designed a couple of years out.

If there isn’t an outright standard, it is technically possible to transfer the TSV and micro-bump layers of the existing die to help design the new die. But, realistically, according to Magma’s Hamid Savoj and Mr. Smith, designers position the interconnect manually today, with spreadsheets being the most common tool.

According to Cadence’s Samta Bansal, Cadence uses a “die abstract” that contains the x-y coordinates of the connections. In a multi-die design, even if all of the dice are being designed from scratch, one of them typically dominates, and its layout is then incorporated into the layout of the other dice via this die abstract.
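Conceptually, such an abstract need not be much more than net names and connection coordinates. The sketch below assumes a hypothetical format and a face-to-face mating orientation (hence the mirror in x); real abstracts carry considerably more information.

```python
# Minimal sketch of a "die abstract": net names mapped to x-y connection
# coordinates (um). Names, coordinates, and the face-to-face orientation
# are all hypothetical assumptions.
die_abstract = {
    "dq0": (120.0, 310.0),
    "dq1": (160.0, 310.0),
    "vdd": (200.0, 310.0),
}

# If the mating die faces this one, its landing pads mirror in x when viewed
# from its own top side (an assumption about stack orientation).
mating_pads = {net: (-x, y) for net, (x, y) in die_abstract.items()}

for net, (x, y) in mating_pads.items():
    print(f"place landing pad for {net} at ({x:.1f}, {y:.1f})")
```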

One question that comes up for co-designed chips is whether verification should be done chip-by-chip or by doing what Mentor calls a “mega-merge,” creating one giant GDS-II file representing all of the chips destined for the stack. Mentor recommends against the mega-merge even though it’s conceptually simple: rule decks from different dice may collide; it’s tough to manage ECOs for individual dice; and you have to assume that full GDS-II info is available for all of the dice. Instead, they recommend continuing to verify individual dice and then doing interface verification to check connectivity and extract parasitics.
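The interface check itself can be thought of as simple bookkeeping: every cross-die net needs a bump on one side and a pad on the other, and the two need to line up. The sketch below uses made-up net names, coordinates, and an assumed alignment tolerance; it isn’t anyone’s actual verification flow.

```python
# Minimal sketch of interface verification: instead of merging full layouts,
# check only the cross-die connections. All names, coordinates (um), and the
# alignment tolerance are hypothetical.
TOL_UM = 1.0

die_a_bumps = {"clk": (10.0, 10.0), "d0": (20.0, 10.0), "d1": (30.0, 10.0)}
die_b_pads  = {"clk": (10.2, 10.1), "d0": (20.0, 10.0), "d2": (40.0, 10.0)}

only_a = die_a_bumps.keys() - die_b_pads.keys()     # bumps with no mating pad
only_b = die_b_pads.keys() - die_a_bumps.keys()     # pads with no mating bump
misaligned = [
    net for net in die_a_bumps.keys() & die_b_pads.keys()
    if max(abs(die_a_bumps[net][0] - die_b_pads[net][0]),
           abs(die_a_bumps[net][1] - die_b_pads[net][1])) > TOL_UM
]

print("unmatched on die A:", sorted(only_a))
print("unmatched on die B:", sorted(only_b))
print("misaligned nets:", misaligned)
```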

All of the full-flow EDA guys can handle 3D IC designs. Mentor is focusing in the short term on what they call “coarse-grained” usage with traditional tools while developing “fine-grained” models for better TSV, stress, power, and thermal modeling. Likewise, Cadence has been working with lead customers on a flow, and Ms. Bansal says that they have already put several chips through the flow to prove it out. Synopsys has also been working on projects, and they see the 3D element very much as being an incremental change to existing design practices, largely being done by companies that already have experience in doing complex packaging and die integration.

Meanwhile, Magma, not being a full-flow provider, has extended their tools to handle the extra modeling elements for extraction and analysis; Mr. Savoj says that their chip finishing tools can handle chips intended for 3D.

The one big area that hasn’t been fully developed yet is full-stack floorplanning or exploration. Such tools would let designers jointly plan all of the dice in the package, optimizing TSV placement, the power grid, and thermal dissipation at the same time, before the dice are designed. All of this is handled ad hoc today. An early pathfinding project notwithstanding, Magma sees such tools being commercially available in 2012; Cadence sees customers wanting them in the 2014-15 timeframe.

So, at this point, we’re more or less midway to full-on 3D IC stack design. Existing tools and models have been augmented to allow for TSVs and the extra dimension. Models are a bit rough right now, with TSVs being “black-boxed”; future work will refine the modeling and extraction. What few extra design steps are needed remain somewhat rough-and-ready, with some manual work required to transfer information from die to die. And planning and optimization are local, not global.

On the other hand, companies aren’t rushing into this; they’re approaching it gingerly, adding a half-dimension first through interposers and easing their way into full 3D. Says Mr. Smith, “We’re all learning in parallel,” and the economics of stacking dice like this have yet to prove themselves. We’ll come back and review the state of the technology as the next steps become more apparent.

