
Going Vertical

Ecosystem for Interposer-based Design?

We’ve talked a lot lately in these pages about the impending demise of Moore’s Law. Consensus is that, somewhere around the half-century mark, one of the most astounding prophecies in human history will have finally run its course. Next year, we’ll have a round of FinFET devices that will be so exotic and expensive that only a handful of companies will be able to use them. In the decade that follows, we may or may not reach 10nm and 7nm production – using either esoteric unlikelies like EUV or extreme-brute-force multi-patterning techniques – to solve just some of the multitude of barriers to continued downscaling. 

Sci-fi techniques like carbon nanotubes, graphene-based devices, quantum computing, and that other-one-you-read-about are so far from production practicality that we may not see any of them in widespread use in our lifetimes. While incredible research shows great promise for many of these ideas, they are all back in the silicon-equivalent of the early 1960s in their evolution. The time and engineering it will take them to catch up with and eventually surpass what we can do with silicon today is substantial.

So, for the foreseeable future, we are left with a silicon technology base that isn’t much different from what we have today: 20-28nm planar CMOS. However, there are still substantial challenges that we haven’t had time to address, due to the overwhelming pace of Moore’s Law to date. Plenty of important problems with semiconductors could not be resolved, simply because the base silicon technology was such a fast-moving target.

One of these is “3D” (or, more appropriately, “2.5D”) packaging. Silicon interposers are just now becoming practical for volume applications, and the benefits could be substantial. As semiconductors have shrunk, the engineering tradeoffs have always favored one thing or another. Analog doesn’t benefit from the same design decisions as digital. Memory isn’t optimized when built on the same process as random logic. When we optimize our semiconductor process for one thing, we’re always compromising something else.

But, since we are typically trying to build a “system on chip” (SoC), and since systems tend to be made up of many different types of components, putting our whole system on one chunk of silicon is a sub-optimal compromise at best. On the other hand, splitting our system into multiple devices introduces a host of other problems – such as PCB complexity, power consumption due to IO drivers that get our signals between chips, footprint, reliability, and cost. 

What we’d really like is a way to have the integration benefits of a monolithic piece of silicon, but the optimization benefits of processes that are tailored to each type of component. This is the realm of the silicon interposer. A silicon interposer is a chip that typically contains only interconnect. You could think of it as a small, silicon-based circuit board. Functional chips are attached to the top of the interposer – just like devices on a PCB. Unlike on a PCB, however, the functional chips connect through a multitude of tiny microbumps and fine-pitch routing on the interposer, with through-silicon vias (TSVs) carrying signals and power down through the interposer to the package. Two chips connected via a silicon interposer (versus a PCB) benefit from a much larger number of interconnections, much shorter overall routes between devices, faster propagation of signals, and much less power required for chip-to-chip signals. In fact, chip-to-chip connections through an interposer are almost identical to on-chip connections.
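To put rough numbers on that power claim, here is a back-of-the-envelope sketch in Python. The capacitance and voltage figures are assumptions chosen only to show the scale of the difference, not measured values for any particular package or process:

def switching_energy_pj(c_farads, v_volts):
    # Energy to charge a load capacitance once: E = C * V^2, returned in picojoules.
    return c_farads * v_volts ** 2 * 1e12

# Assumed PCB path: IO pad, package, and board trace, driven at IO voltage.
pcb_pj = switching_energy_pj(c_farads=10e-12, v_volts=1.8)          # roughly 10 pF at 1.8 V

# Assumed interposer path: microbump plus a short interposer trace, near core voltage.
interposer_pj = switching_energy_pj(c_farads=0.3e-12, v_volts=1.0)  # roughly 0.3 pF at 1.0 V

print(f"PCB route:        {pcb_pj:.1f} pJ per transition")
print(f"Interposer route: {interposer_pj:.2f} pJ per transition")
print(f"Ratio:            roughly {pcb_pj / interposer_pj:.0f}x")

With those assumed numbers, the interposer route costs roughly two orders of magnitude less energy per transition, which is a big part of why those connections behave so much like on-chip wires.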

This means that we can use a silicon interposer to effectively build larger chips out of many smaller ones. Xilinx has been producing devices like this for several years – starting with their V2000T FPGAs, which use a silicon interposer to connect multiple FPGA chips into what appears to be one giant FPGA. In this type of application, the use of the interposer is really just boosting yield. It’s much easier to make four smaller chips than it is to make one giant one, so the interposer gives us the power to build an almost impossibly large device.
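A textbook Poisson yield model, Y = exp(-A*D), makes the point concrete. The defect density and die areas below are purely illustrative assumptions, not actual foundry or Xilinx figures:

import math

def poisson_yield(area_cm2, defects_per_cm2):
    # Classic Poisson yield model: Y = exp(-A * D).
    return math.exp(-area_cm2 * defects_per_cm2)

D          = 0.5              # assumed defect density, defects per cm^2
big_area   = 6.0              # hypothetical monolithic die, cm^2
slice_area = big_area / 4     # each of four slices placed on the interposer

y_big   = poisson_yield(big_area, D)
y_slice = poisson_yield(slice_area, D)

# Silicon consumed per good product, assuming slices are tested before
# assembly so only known-good dice ever reach the interposer.
monolithic_cost = big_area / y_big            # cm^2 of silicon per good giant die
chiplet_cost    = 4 * (slice_area / y_slice)  # cm^2 of silicon per four good slices

print(f"Monolithic yield: {y_big:.1%}, silicon per good part: {monolithic_cost:.0f} cm^2")
print(f"Per-slice yield:  {y_slice:.1%}, silicon per good part: {chiplet_cost:.0f} cm^2")

With those assumptions, the giant die yields around five percent while each slice yields nearly fifty, so the silicon burned per good part drops by roughly an order of magnitude once the slices are tested before assembly.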

The more interesting applications of interposers, however, are using different devices that have been fabricated with different semiconductor processes – all on the same interposer. We believe Xilinx uses this technique to, for example, combine proven multi-gigabit serial transceivers from a larger process geometry with FPGA fabric chips from a smaller geometry – taking advantage of the density and performance of the smaller geometry for the logic, while benefitting from the stability and analog performance of the larger geometry for the transceivers. 

It doesn’t take much imagination to see where this approach could lead. If we look at a typical system, we see components like analog, RF, power, RAM (both volatile and non-volatile), ROM, processors, programmable logic, and even MEMS, all in the same system. If these components could each be fabricated using a process optimized for that function, and then assembled on a silicon interposer, we would truly have the best of both worlds. We’d get the integration, power efficiency, performance, and reliability of a single chip, combined with the best-of-breed fabrication process for each component.

What we don’t yet have is an ecosystem for widespread production use of such devices by average system design companies.

First, we need an integration house – which assembles and tests our final, interposed, packaged device. This function may well be performed by the same players who do these tasks in today’s semiconductor world, but we have yet to see the high-service approach that brings this conveniently to system design teams in one-stop, hold-your-hand fashion. 

Second, we need component suppliers to agree on standards for interposer-based connectivity, and to produce tested chips specifically for that purpose. We’d like to be able to order our memory slices from a memory company, our processors from a processor company, our FPGAs from an FPGA supplier, and so forth. Then, we’d like that integration house we mentioned to be able to stick them all together according to our system design with a minimal amount of fuss, package them, and deliver us working parts.
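What such a standard would actually look like is still an open question. Purely as a hypothetical sketch, here is the kind of machine-readable interface descriptor a memory supplier might publish alongside its tested, known-good dice; every field name and value below is invented for illustration:

# Hypothetical chiplet interface descriptor: a standardized, machine-readable
# datasheet a component supplier might ship with tested known-good dice.
# All names and values are invented for illustration.
memory_die_descriptor = {
    "part":          "example_dram_slice",
    "process":       "commodity DRAM node",
    "bump_pitch_um": 45,
    "supply_v":      1.1,
    "bumps": {
        "M1": {"signal": "dq0", "type": "io"},
        "M2": {"signal": "dq1", "type": "io"},
        "M3": {"signal": "clk", "type": "input"},
        "V1": {"signal": "vdd", "type": "power"},
        "G1": {"signal": "vss", "type": "ground"},
    },
}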

Third, we need a new set of design tools tailored to interposer-based system integration. These would look a lot like the tools we already use for board-based system design, but, of course, there would be a very different set of back-end capabilities required for creating and verifying the system design integrated on a silicon interposer.
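As a hedged sketch of what one of those back-end checks might do, the Python fragment below verifies that every endpoint of every interposer net lands on a bump that the attached chiplet actually declares. All of the names are hypothetical, and a real tool would also need to check bump pitch, placement, timing, and power delivery:

# Sketch of one back-end check an interposer-level design tool might run:
# every interposer net must terminate on bumps the chiplets actually declare.
# Instance, bump, and net names are invented for illustration.

# Signal bumps each chiplet instance declares (e.g. from supplier descriptors).
declared_bumps = {
    "fpga0": {"A1", "A2", "B1"},
    "mem0":  {"M1", "M2", "M3"},
}

# Interposer netlist: net name -> list of (chiplet instance, bump) endpoints.
nets = {
    "net_dq0": [("fpga0", "A1"), ("mem0", "M1")],
    "net_dq1": [("fpga0", "A2"), ("mem0", "M2")],
    "net_clk": [("fpga0", "B1"), ("mem0", "M9")],   # deliberate mistake: M9 does not exist
}

def check_connectivity(nets, declared_bumps):
    errors = []
    for net, endpoints in nets.items():
        for instance, bump in endpoints:
            if bump not in declared_bumps.get(instance, set()):
                errors.append(f"{net}: {instance} declares no bump '{bump}'")
    return errors

for problem in check_connectivity(nets, declared_bumps):
    print("ERROR:", problem)

Running it flags the deliberately miswired clock net, which is exactly the sort of cross-supplier mismatch these tools will need to catch long before anything reaches the integration house.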

Finally, we’ll need early adopters to pave the way. Those pioneers (and we have several already) will develop the design flows, commoditize the now-exotic technologies and techniques required to make interposer-based design work, and slowly build the confidence of the rest of the engineering world as we watch curiously from the sidelines.

This ecosystem is gradually taking shape already, but it is forming around the currently small number of companies who happen to have problems that can be economically solved by interposers even in today’s rather challenging environment. What we need are a few players with the vision to bundle up the required technologies and turn them into a more turnkey service for the larger system design audience. Then, we’ll see something truly interesting happen.
