
From One Pot to Another

Coventor Brings In a New CTO

A number of years ago, Coventor created a program called SEMulator 3D. It was aimed at the development of MEMS processes and devices, MEMS devices traditionally having the unfortunate characteristic that a specific device typically requires a specific process. Coventor refers to the tool as “virtual fabrication,” since it can abstract and model the steps used to fabricate a MEMS wafer.

One of their customers was IBM, and one of the people at IBM working with the tool thought that it would be useful for silicon processes as well. After all, some details and occasional strange materials aside, silicon circuits and MEMS chips are made the same way.

The individual who championed this adaptation of SEMulator 3D to integrated circuits was Dr. David Fried (pronounced “Freed”), and he was recently named CTO at Coventor, moving over from IBM. I spent a few minutes with him on the phone around the time of the announcement to get a sense of his motivations and his focus in the new role.

Of course, motivations are initially expressed in terms of mutual respect between the new team member and the team, but that’s kind of self-evident, or else the transaction wouldn’t have happened. So let’s look beyond that. After all, Dr. Fried has worked with Coventor for many years from the IBM side. Why switch? Yes, the title might be nice, but an exalted position at IBM isn’t anything to sneer at either.

So are there any specific inflection points underway or imminent that motivated the move?

The main driver can be summed up in one word: complexity. Of course, complexity can be an incremental frog-boiling experience. You’re working away and things gradually get harder and harder without your realizing how complex they’re getting and the water gets hotter and hotter and before you know it… you’re grenouillabaisse.

This scenario, however true it might be in general, isn’t what’s happened here. You might say that the water suddenly got a step-function hotter, and the frog jumped out of the pot. And straight into a new pot. (At the risk of characterizing the move as an attempted escape, which, of course, it wasn’t…) Dr. Fried sums up this change as the move away from planar processing, resulting in four specific items that have ratcheted up the water temperature:

  • FinFETs
  • 3-D ICs
  • Dramatically more sophisticated gate stacks
  • A much more complex backend

FinFETs are, of course, the poster child for this change, with 3-D ICs being the other embodiment of a third dimension. The gate stacks and backend complications result from these two fundamental changes.

The role he envisions for SEMulator 3D shifts (or increases) from virtual fabrication to virtual metrology. There are so many variables now, and the range of variation is so high, that it would be impossible to run full suites of wafers to evaluate them all.

The outcomes that depend on this work define the essential elements of a process – and, most importantly, identify the parameters within which designs must be targeted in order to yield well and make money. This means all the design rules, patterns to be matched, and DFM corrections. There are so many moving parts that sifting out the thousands of rules and adjustments requires enormous amounts of data – data that would be incredibly time-consuming to acquire if a full complement of physical experiments had to be run.

Granted, you have to run some real wafers to figure this stuff out; you can’t simply simulate your way there. But part of the role that Dr. Fried sees for SEMulator 3D is to sort through the permutations and combinations to identify a subset of all of the possible experiments to conduct. The idea is to collect the richest set of data with the fewest experiments; they will position their tool to help do that.
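One way to picture that “richest data, fewest experiments” idea is as a space-filling selection problem over the process parameters. The sketch below is purely illustrative – the parameter names and the greedy farthest-point heuristic are my assumptions, not anything Coventor has described – but it shows how a handful of runs can be chosen to spread out across a full factorial grid:

```python
import itertools

# Hypothetical process parameters and ranges -- purely illustrative,
# not Coventor's actual variable set.
params = {
    "etch_time_s": [30, 45, 60],
    "dep_thickness_nm": [10, 20, 30],
    "litho_bias_nm": [-2, 0, 2],
}

# Full factorial: every combination of parameter values (3^3 = 27 runs).
full_grid = list(itertools.product(*params.values()))

def normalized(point):
    """Scale each coordinate to [0, 1] so distances are comparable."""
    return tuple(
        (v - min(vals)) / (max(vals) - min(vals))
        for v, vals in zip(point, params.values())
    )

def distance_sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def pick_experiments(grid, budget):
    """Greedy farthest-point selection: each new experiment is the
    candidate farthest from everything already chosen, spreading a
    limited wafer budget across the parameter space."""
    chosen = [grid[0]]
    while len(chosen) < budget:
        best = max(
            (p for p in grid if p not in chosen),
            key=lambda p: min(
                distance_sq(normalized(p), normalized(c)) for c in chosen
            ),
        )
        chosen.append(best)
    return chosen

subset = pick_experiments(full_grid, budget=5)
print(f"{len(subset)} of {len(full_grid)} runs selected")
```

Real design-of-experiments tools use far more sophisticated criteria, but the shape of the problem – picking a small, maximally informative subset of a huge combination space – is the same.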

Of course, even once this work is all done, he sees the interaction between foundry and fabless company changing. Traditionally, designers get a menu of devices from which to choose. Those devices have been well-characterized, and that characterization data informs the simulation tools that the designer uses to verify his or her design. We’re seeing that start to change with the 20-nm node, where, at least at present, you can get any color transistor you want, as long as it’s black.

But, as Dr. Fried sees it, it won’t stop there. Fabless folks will increasingly be unable to remain disconnected from the dirty details of the transistors and other devices they’re going to use. He sees an increasing need for the designers to work with the foundries to develop what will essentially be custom devices, tailored to meet the needs of the specific designs being created.

This has a distinct back-to-the-future feel; unless my memory is failing me, I could swear that, early on, our silicon designs were custom-sized, with each device being lovingly dimensioned and tweaked and re-tweaked as needed to tune the performance of the circuit – all few-dozen transistors of it. I guess the difference (not counting the obvious orders of magnitude by which the device count has changed) is the explosion in the number of knobs to tweak and the fact that your fab counterpart happens to work for a different company.

The other big change, of course, is the fact that we’re no longer concerning ourselves with simply a single die. 2.5-D and 3-D ICs now merge multiple dice with connections either directly to each other or routed on interposers. Some of this is incremental – the redistribution layers (RDLs) on the back of the wafer are simply another set of layers. But a new program modeling that process will have to depart from the usual bottom-up approach because you don’t start at the bottom: you do the top set of processing and then flip the thing and do the bottom. TSVs typically involve drilling from the top and then grinding off the back. While this means a change to the layering model, it’s only a nominal change, allowing both bottom-up and top-down (or center-up/center-down) layering.
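That center-up/center-down idea can be sketched as a small data structure – all names and thicknesses here are hypothetical, and this is only a toy model of the layering change, not SEMulator 3D’s actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    thickness_um: float

@dataclass
class DieBuild:
    """Layer stack built outward from a central substrate: process the
    front side, then flip the wafer and process the back side (the
    'center-up / center-down' model, vs. pure bottom-up layering)."""
    substrate: Layer
    front: list = field(default_factory=list)  # built center-up
    back: list = field(default_factory=list)   # built center-down, after flip

    def add_front(self, layer):
        self.front.append(layer)

    def add_back(self, layer):
        self.back.append(layer)

    def full_stack(self):
        """Physical order, backside first, then substrate, then frontside."""
        return list(reversed(self.back)) + [self.substrate] + self.front

die = DieBuild(Layer("si_substrate", 100.0))
die.add_front(Layer("transistors", 1.0))
die.add_front(Layer("metal_stack", 5.0))
die.add_back(Layer("rdl", 3.0))  # backside RDL, added after the flip
print([l.name for l in die.full_stack()])
# → ['rdl', 'si_substrate', 'transistors', 'metal_stack']
```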

It gets yet more complex once you start bringing the pieces together inside a package. It could be tempting to model everything as simply a more complex stack of layers. But that fails for a number of reasons:

  • We’ve already seen that this model falls short even for the RDLs.
  • Some dice might be flipped, with their microbumps landing on the pads of a receiving die. So what’s up for one die is down for another.
  • If you have two dice mounted side-by-side on an interposer, those two dice are likely to have been made on entirely different processes. So their layers are not like those of the geological column, where you can align strata from Wales and Nebraska. They simply don’t correspond. Building them up next to each other would be a case of shoehorning reality into an existing model rather than modeling the new reality.

So this suggests another layer of hierarchy, where each die or interposer has its layer-by-layer build-up (towards the top or bottom) and then, at a higher level, these things are brought together – which isn’t necessarily a layer-oriented process.
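A toy version of that extra hierarchy level might look like the sketch below – again, an assumption of mine about how such a model could be organized, not a description of the actual tool. Each die keeps its own internal layer list, and the assembly level records placements rather than layers:

```python
from dataclasses import dataclass, field

@dataclass
class Die:
    name: str
    layers: list           # per-die layer-by-layer build-up (names only here)
    flipped: bool = False  # flip-chip: microbumps face down onto the carrier

@dataclass
class Assembly:
    """Higher hierarchy level: dice from entirely different processes
    placed on an interposer. Placement is positional, not layer-by-layer,
    so the dice's internal stacks never have to line up with each other."""
    interposer: Die
    placements: list = field(default_factory=list)  # (die, (x_mm, y_mm))

    def place(self, die, xy):
        self.placements.append((die, xy))

logic = Die("logic", ["fins", "gate", "metal_stack"], flipped=True)
memory = Die("hbm", ["dram_array", "metal_stack"], flipped=True)  # different process
interp = Die("interposer", ["tsv", "rdl"])

pkg = Assembly(interp)
pkg.place(logic, (0.0, 0.0))
pkg.place(memory, (10.0, 0.0))  # side-by-side; unrelated layer stacks
print([(d.name, xy) for d, xy in pkg.placements])
```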

It will be interesting to see how (if) this higher level of abstraction is worked into the tool; it clearly represents a non-incremental inflection in the methodology.

So clearly Dr. Fried and the Coventor team have their work cut out for them. As does anyone with the temerity to try to model the extraordinary complexity that, just when you think it can’t get worse, does get worse.

Makes you wonder if it’s time to jump out of the pot.
