
Going Vertical

Ecosystem for Interposer-based Design?

We’ve talked a lot lately in these pages about the impending demise of Moore’s Law. Consensus is that, somewhere around the half-century mark, one of the most astounding prophecies in human history will have finally run its course. Next year, we’ll have a round of FinFET devices that will be so exotic and expensive that only a handful of companies will be able to use them. In the decade that follows, we may or may not reach 10nm and 7nm production, relying on either esoteric unlikelies like EUV or extreme-brute-force multi-patterning techniques to solve just some of the multitude of barriers to continued downscaling.

Sci-fi techniques like carbon nanotubes, graphene-based devices, quantum computing, and that other-one-you-read-about are so far from production practicality that we may not see any of them in widespread use in our lifetimes. While incredible research shows great promise for many of these ideas, they are all back at the silicon equivalent of the early 1960s in their evolution. The time and engineering it will take for them to catch up with, and eventually surpass, what we can do with silicon today are substantial.

So, for the foreseeable future, we are left with a silicon technology base that isn’t much different from what we have today: 20-28nm planar CMOS. However, there are still substantial challenges that we haven’t had time to address, due to the overwhelming pace of Moore’s Law to date. Plenty of important problems with semiconductors could not be resolved simply because the base silicon technology was such a fast-moving target.

One of these is “3D” (or, more appropriately, “2.5D”) packaging. Silicon interposers are just now becoming practical for volume applications, and the benefits could be substantial. As semiconductors have shrunk, the engineering tradeoffs have always favored one thing or another. Analog doesn’t benefit from the same design decisions as digital. Memory isn’t optimized when built on the same process as random logic. When we optimize our semiconductor process for one thing, we’re always compromising something else.

But, since we are typically trying to build a “system on chip” (SoC), and since systems tend to be made up of many different types of components, putting our whole system on one chunk of silicon is a sub-optimal compromise at best. On the other hand, splitting our system into multiple devices introduces a host of other problems – such as PCB complexity, power consumption due to IO drivers that get our signals between chips, footprint, reliability, and cost. 

What we’d really like is a way to have the integration benefits of a monolithic piece of silicon, but the optimization benefits of processes that are tailored to each type of component. This is the realm of the silicon interposer. A silicon interposer is a chip that typically contains only interconnect. You could think of it as a small, silicon-based circuit board. Functional chips are attached to the top of the interposer – just like devices on a PCB. Unlike on a PCB, however, the functional chips attach through a multitude of tiny microbump connections, with through-silicon vias (TSVs) carrying signals through the interposer itself. Two chips connected via a silicon interposer (versus a PCB) benefit from a much larger number of interconnections, much shorter overall routes between devices, faster propagation of signals, and much less power required for chip-to-chip signals. In fact, chip-to-chip connections through an interposer are almost identical to on-chip connections.
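To put a rough number on that last claim, consider the first-order CMOS switching-energy estimate, E ≈ α·C·V² per bit. The short Python sketch below compares a chip-to-chip signal driven across a PCB trace with one driven across an interposer route. The capacitance and voltage-swing figures are illustrative assumptions chosen for the arithmetic, not measurements of any particular product.

def switching_energy_pj(c_load_pf, v_swing, activity=0.5):
    """First-order dynamic switching energy per bit, in picojoules.

    E ~= activity * C * V^2, ignoring driver, termination, and leakage
    overheads.  With C in pF and V in volts, the result is in pJ.
    """
    return activity * c_load_pf * v_swing ** 2

# Illustrative assumptions only, not measured values for any real device.
pcb_link = switching_energy_pj(c_load_pf=10.0, v_swing=1.0)        # ~5 pJ/bit
interposer_link = switching_energy_pj(c_load_pf=0.3, v_swing=0.8)  # ~0.1 pJ/bit

print(f"PCB trace:        {pcb_link:.2f} pJ/bit")
print(f"Interposer route: {interposer_link:.2f} pJ/bit")
print(f"Ratio:            ~{pcb_link / interposer_link:.0f}x")

Even with generous error bars on those assumed numbers, the capacitance gap alone explains why interposer links behave much more like on-chip wires than like board traces.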

This means that we can use a silicon interposer to effectively build larger chips out of many smaller ones. Xilinx has been producing devices like this for several years – starting with their Virtex-7 2000T FPGAs, which use a silicon interposer to connect multiple FPGA chips into what appears to be one giant FPGA. In this type of application, the use of the interposer is really just boosting yield. It’s much easier to make four smaller chips than it is to make one giant one, so the interposer gives us the power to build an almost impossibly large device.
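The yield argument is easy to quantify with the classic first-order Poisson die-yield model, Y = e^(−A·D0). The defect density and die areas in this Python sketch are invented for illustration; they are not Xilinx or foundry figures.

import math

def poisson_yield(area_cm2, defects_per_cm2):
    """Classic first-order Poisson die-yield model: Y = exp(-A * D0)."""
    return math.exp(-area_cm2 * defects_per_cm2)

D0 = 0.5           # assumed defect density, defects/cm^2 (illustrative only)
mono_area = 8.0    # one hypothetical monolithic die, in cm^2
slice_area = 2.0   # the same logic split into four 2 cm^2 slices

y_mono = poisson_yield(mono_area, D0)    # ~1.8% of monolithic dies are good
y_slice = poisson_yield(slice_area, D0)  # ~36.8% of slices are good

# Four untested slices together would be no better than the big die
# (0.368**4 is also ~1.8%), but each slice is tested before assembly, so
# only known-good dies reach the interposer.  The usable fraction of wafer
# area is therefore set by the per-slice yield, not by the product of four
# slice yields.
print(f"monolithic die yield:   {y_mono:.1%}")
print(f"known-good-slice yield: {y_slice:.1%}")
print(f"usable-silicon gain:    ~{y_slice / y_mono:.0f}x")

Under those made-up numbers, partitioning turns a roughly 2%-yielding monster die into slices that yield well into double digits, which is the whole economic point of the exercise.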

The more interesting applications of interposers, however, involve combining devices that have been fabricated with different semiconductor processes – all on the same interposer. We believe Xilinx uses this technique, for example, to combine proven multi-gigabit serial transceivers built on a larger process geometry with FPGA fabric chips built on a smaller geometry – taking advantage of the density and performance of the smaller geometry for the logic, while benefiting from the stability and analog performance of the larger geometry for the transceivers.

It doesn’t take much imagination to see where this approach could lead. If we look at a typical system, we see components like analog, RF, power, RAM (both volatile and non-volatile), ROM, processors, programmable logic, and even MEMS, all in the same system. If these components could each be fabricated using a process optimized for that function, and then assembled on a silicon interposer, we would truly have the best of both worlds. We’d get the integration, power efficiency, performance, and reliability of a single chip, combined with the best-of-breed fabrication process for each component.

What we don’t yet have is an ecosystem for widespread production use of such devices by average system design companies.

First, we need an integration house – one that assembles and tests our final, packaged, interposer-based device. This function may well be performed by the same players who do these tasks in today’s semiconductor world, but we have yet to see the high-service approach that brings this conveniently to system design teams in one-stop, hold-your-hand fashion.

Second, we need component suppliers to agree on standards for interposer-based connectivity, and to produce tested chips specifically for that purpose. We’d like to be able to order our memory slices from a memory company, our processors from a processor company, our FPGAs from an FPGA supplier, and so forth. Then, we’d like that integration house we mentioned to be able to stick them all together according to our system design with a minimal amount of fuss, package them, and deliver us working parts.

Third, we need a new set of design tools tailored to interposer-based system integration. These would look a lot like the tools we already use for board-based system design, but, of course, there would be a very different set of back-end capabilities required for creating and verifying the system design integrated on a silicon interposer.
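To make that concrete, here is a toy sketch (in Python, and purely hypothetical, since no standard format for this exists today) of the kind of system-level description such a tool might consume: a list of chiplets, each with its own process, plus the point-to-point links the back end would have to place and route on the interposer. All names, fields, and numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Chiplet:
    name: str
    process: str   # process node this die is fabricated in
    bumps: int     # micro-bump connections the die exposes to the interposer

@dataclass
class Link:
    src: str             # "chiplet.port" on one end of the interposer route
    dst: str             # "chiplet.port" on the other end
    width: int           # number of parallel signals in the link
    max_length_um: int   # routing-length constraint for the back-end tool

# A toy "system netlist" for an interposer-based design.  Everything below is
# hypothetical and exists only to illustrate the shape of the problem.
system = {
    "chiplets": [
        Chiplet("logic",  process="20nm CMOS", bumps=12_000),
        Chiplet("dram",   process="DRAM-optimized", bumps=8_000),
        Chiplet("serdes", process="40nm mixed-signal", bumps=2_000),
    ],
    "links": [
        Link("logic.mem_if",    "dram.channel0",    width=512, max_length_um=800),
        Link("logic.serial_if", "serdes.parallel0", width=128, max_length_um=1200),
    ],
}

The front end of such a flow looks a lot like schematic capture for a board; the new work is all in the back end, which has to turn those links into interposer metal, verify timing and signal integrity across dies, and hand a manufacturable result to the integration house.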

Finally, we’ll need early adopters to pave the way. Those pioneers (and we have several already) will develop the design flows, commoditize the now-exotic technologies and techniques required to make interposer-based design work, and slowly build the confidence of the rest of the engineering world as we watch curiously from the sidelines.

This ecosystem is gradually taking shape already, but it is forming around the currently small number of companies who happen to have problems that can be economically solved by interposers even in today’s rather challenging environment. What we need are a few players with the vision to bundle up the required technologies and turn them into a more turnkey service for the larger system design audience. Then, we’ll see something truly interesting happen.
