
From One Pot to Another

Coventor Brings In a New CTO

A number of years ago, Coventor created a program called SEMulator 3D. It was aimed at developing MEMS processes and devices, MEMS devices traditionally having the unfortunate characteristic that a specific device typically requires a specific process. Coventor refers to it as “virtual fabrication” since it can abstract and model the steps used to fabricate a MEMS wafer.

One of their customers was IBM, and one of the people at IBM working with the tool thought that it would be useful for silicon processes as well. After all, some details and occasional strange materials aside, silicon circuits and MEMS chips are made the same way.

The individual who championed this adaptation of SEMulator 3D to integrated circuits was Dr. David Fried (pronounced “Freed”), and he was recently named CTO at Coventor, moving over from IBM. I spent a few minutes with him on the phone around the time of the announcement to get a sense of his motivations and his focus in the new role.

Of course, motivations are initially expressed in terms of mutual respect between the new team member and the team, but that’s kind of self-evident, or else the transaction wouldn’t have happened. So let’s look beyond that. After all, Dr. Fried has worked with Coventor for many years from the IBM side. Why switch? Yes, title might be nice, but an exalted position at IBM isn’t anything to sneer at either.

So are there any specific inflection points underway or imminent that motivated the move?

The main driver can be summed up in one word: complexity. Of course, complexity can be an incremental frog-boiling experience. You’re working away, things gradually get harder and harder without your realizing how complex they’re getting, the water gets hotter and hotter, and before you know it… you’re grenouillabaisse.

This scenario, however true it might be in general, isn’t what’s happened here. You might say that the water suddenly got a step-function hotter, and the frog jumped out of the pot. And straight into a new pot. (At the risk of characterizing the move as an attempted escape, which, of course, it wasn’t…) Dr. Fried sums up this change as the move away from planar processing, resulting in four specific items that have ratcheted up the water temperature:

  • FinFETs
  • 3-D ICs
  • Dramatically more sophisticated gate stacks
  • A much more complex backend

FinFETs are, of course, the poster child for this change, with 3-D ICs being the other embodiment of a third dimension. The gate stacks and backend complications result from these two fundamental changes.

The role he envisions for SEMulator 3D shifts (or increases) from virtual fabrication to virtual metrology. There are so many variables now, and the range of variation is so high, that it would be impossible to run full suites of wafers to evaluate them all.

The outcomes that depend on this work define the essential elements of a process – and, most importantly, identify the parameters within which designs must be targeted in order to yield well and make money. This means all the design rules, patterns to be matched, and DFM corrections. There are so many moving parts that sifting out the thousands of rules and adjustments requires enormous amounts of data – data that would be incredibly time-consuming to acquire if a full complement of physical experiments had to be run.
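
Just to put rough numbers on “impossible to run full suites,” here’s a quick back-of-the-envelope calculation. The parameter counts are mine, purely illustrative, not Coventor’s:

```python
# Purely illustrative: suppose a process has 12 tunable parameters
# (etch depths, deposition thicknesses, litho biases, ...), each
# explored at 5 levels. A full-factorial sweep of real wafers needs:
parameters = 12
levels = 5
full_factorial_runs = levels ** parameters
print(f"{full_factorial_runs:,} runs")  # 244,140,625 runs
```

Even if each run took only an hour of fab time, that works out to tens of thousands of years of experiments.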

Granted, you have to run some real wafers to figure this stuff out; you can’t simply simulate your way there. But part of the role that Dr. Fried sees for SEMulator 3D is to sort through the permutations and combinations to identify a subset of all of the possible experiments to conduct. The idea is to collect the richest set of data with the fewest experiments; they will position their tool to help do that.
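
A classic way to pick such a subset is space-filling sampling – a Latin hypercube, say – which spreads a small number of runs evenly through the parameter space. Here’s a minimal sketch of the idea, with made-up parameter names and ranges; this is my own illustration, not Coventor’s actual method:

```python
import numpy as np
from scipy.stats import qmc  # SciPy's quasi-Monte Carlo samplers

# Hypothetical process parameters and their ranges (names invented)
names    = ["etch_depth_nm", "dep_thickness_nm", "litho_bias_nm", "cmp_rate"]
l_bounds = [40.0,  8.0, -3.0, 0.8]
u_bounds = [60.0, 14.0,  3.0, 1.2]

# 25 runs spread evenly through the 4-D space, versus 5**4 = 625
# runs for a full 5-level factorial
sampler = qmc.LatinHypercube(d=len(names), seed=0)
runs = qmc.scale(sampler.random(n=25), l_bounds, u_bounds)

for run in runs[:3]:  # the first few experiment "recipes"
    print(dict(zip(names, np.round(run, 2))))
```

Each row is one recipe; the virtual-metrology results from simulating all of them tell you which corners merit real silicon.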

Of course, even once this work is all done, he sees the interaction between foundry and fabless company changing. Traditionally, designers get a menu of devices from which to choose. Those devices have been well-characterized, and that characterization data informs the simulation tools that the designer uses to verify his or her design. We’re seeing that start to change with the 20-nm node, where, at least at present, you can get any color transistor you want, as long as it’s black.

But, as Dr. Fried sees it, it won’t stop there. Fabless folks will increasingly be unable to remain disconnected from the dirty details of the transistors and other devices they’re going to use. He sees an increasing need for the designers to work with the foundries to develop what will essentially be custom devices, tailored to meet the needs of the specific designs being created.

This has a distinct back-to-the-future feel; unless my memory is failing me, I could swear that, early on, our silicon designs were custom-sized, with each device being lovingly dimensioned and tweaked and re-tweaked as needed to tune the performance of the circuit – all few-dozen transistors of it. I guess the difference (not counting the obvious orders of magnitude by which the device count has changed) is the explosion in the number of knobs to tweak and the fact that your fab counterpart happens to work for a different company.

The other big change, of course, is the fact that we’re no longer concerning ourselves with simply a single die. 2.5-D and 3-D ICs now merge multiple dice with connections either directly to each other or routed on interposers. Some of this is incremental – the redistribution layers (RDLs) on the back of the wafer are simply another set of layers. But a new program modeling that process will have to depart from the usual bottom-up approach because you don’t start at the bottom: you do the top set of processing and then flip the thing and do the bottom. TSVs typically involve drilling from the top and then grinding off the back. While this means a change to the layering model, it’s only a nominal change, allowing both bottom-up and top-down (or center-up/center-down) layering.
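
For the curious, here’s a minimal sketch of what a two-sided layering model might look like. This is my own illustration of the idea, not a peek at SEMulator 3D’s internals:

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    thickness_um: float

@dataclass
class WaferStack:
    """Layers grow away from a reference plane (say, the original
    silicon surface) in both directions: frontside processing builds
    one way; after the flip-and-grind, backside layers build the other."""
    front: list[Layer] = field(default_factory=list)  # built bottom-up
    back:  list[Layer] = field(default_factory=list)  # built after flip

stack = WaferStack()
stack.front.append(Layer("devices", 0.5))
stack.front.append(Layer("metal_stack", 5.0))
# ... flip the wafer, grind to expose the TSVs, then:
stack.back.append(Layer("backside_RDL", 2.0))
```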

It gets yet more complex once you start bringing the pieces together inside a package. It could be tempting to model everything as simply a more complex stack of layers. But that fails for a number of reasons:

  • We’ve already seen that this model falls short even for the RDLs.
  • Some dice might be flipped, with their microbumps landing on the pads of a receiving die. So what’s up for one die is down for another.
  • If you have two dice mounted side-by-side on an interposer, those two dice are likely to have been made on entirely different processes. So their layers are not like those of the geological column, where you can align strata from Wales and Nebraska. They simply don’t correspond. Building them up next to each other would be a case of shoehorning reality into an existing model rather than modeling the new reality.

So this suggests another layer of hierarchy, where each die or interposer has its layer-by-layer build-up (towards the top or bottom) and then, at a higher level, these things are brought together – which isn’t necessarily a layer-oriented process.
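
Continuing the earlier sketch (and reusing its Layer and WaferStack classes), that higher level might look something like this: each die or interposer keeps its own independent stack, and the assembly records placement and orientation rather than layers. Again, this is my speculation about the shape of the problem, not the tool’s actual model:

```python
@dataclass
class Placement:
    """One die positioned within the package."""
    die: WaferStack     # its own layer stack, from its own process
    flipped: bool       # microbumps down onto a receiving die?
    x_um: float         # lateral position on the interposer
    y_um: float

@dataclass
class Assembly:
    """The assembly level is not layer-oriented: two side-by-side dice
    from different processes share no common 'stratigraphy'."""
    interposer: WaferStack
    placements: list[Placement] = field(default_factory=list)

pkg = Assembly(interposer=WaferStack(),
               placements=[Placement(stack, flipped=True, x_um=0.0, y_um=0.0)])
```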

It will be interesting to see how (if) this higher level of abstraction is worked into the tool; it clearly represents a non-incremental inflection in the methodology.

So clearly Dr. Fried and the Coventor team have their work cut out for them. As does anyone with the temerity to try to model the extraordinary complexity that, just when you think it can’t get worse, does get worse.

Makes you wonder if it’s time to jump out of the pot.

