Extracting a Life

Cutting corners never seemed like cutting corners before. It seemed like a practical way to handle real-world problems. You don’t want to make any problem more complex than it needs to be, or it will be too hard to solve on an everyday basis. So you simplify.

But if the world gets more complicated, then what were once practical simplifications now look like cutting corners, and it’s no longer acceptable.

And so it goes with RC extraction tools. You just can’t ignore the things you used to be able to ignore. This sleepy little corner of EDA has historically drifted quietly along in the shadows, but, more recently, has made some moves towards the spotlight.

And for good reason. The crazy geometries being built in foundries result in all kinds of interactions that never used to occur (or matter). So the simple rule-based approaches of yesteryear are quickly giving way to sophisticated mathematical juggernauts; the question is, how do you balance accuracy against the amount of time it takes to calculate?

Last year we looked at some of the field solver technology that’s become central to these tools, but that discussion focused on the technology itself – with a couple of small startups providing examples of two competing approaches to the problem.

But the entire landscape has started to morph as incumbents and pretenders (in the old to-the-throne sense), companies big and small, are making noise about their new approaches to extraction. So let’s take a look at the landscape as it appears to this simple scribe and then use that as context for a couple of recent announcements.

First of all, to be clear, the purpose of these tools is to understand how the various structures on a chip affect each other electrically. This used to refer mostly to devices sitting side by side in a monolithic silicon wafer, but it’s increasingly important for 3D considerations as die stacking and through-silicon vias (TSVs) move into the mainstream.

In the old days, you could model these interactions as lumped parasitic resistors and capacitors and build a nice simple circuit network (one that seems simple only in retrospect). No longer: now the goal is full-wave analysis, solving Maxwell’s equations. But that’s hard and takes a long time, so companies approximate or approach it using various clever techniques to get the best balance. Each company hopes to be the cleverest.
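To make that lumped-element picture concrete, here’s a toy sketch in Python (the numbers are made up, purely for illustration – no vendor’s engine looks like this): an RC ladder built from per-segment extracted parasitics, with the classic Elmore approximation for its delay.

```python
# Toy lumped-parasitic RC ladder with the classic Elmore delay estimate.
# Purely illustrative; real tools solve (or approximate) Maxwell's equations
# over the actual 3D geometry rather than hand-built ladders like this.

def elmore_delay(segments):
    """Elmore delay of an RC ladder, driver at one end, load at the other.

    segments: (R_ohms, C_farads) pairs ordered from driver to load; each
    capacitor charges through all of the resistance upstream of it.
    """
    delay = 0.0
    r_upstream = 0.0
    for r, c in segments:
        r_upstream += r           # total resistance back to the driver
        delay += r_upstream * c   # this cap's contribution to the delay
    return delay

# Three identical wire segments: 10 ohms and 5 fF each (made-up numbers).
wire = [(10.0, 5e-15)] * 3
print(f"Elmore delay: {elmore_delay(wire) * 1e12:.2f} ps")
```

The point is how simple the network view is once the R’s and C’s are in hand; the hard (and, as we’ll see, contested) part is extracting those values accurately in the first place.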

Broadly speaking, extraction tools can be divided into two categories: those that are used to model single devices or small circuits extraordinarily accurately and those that are used for sign-off on a full chip. Obviously the difference here is scale, and the tradeoff is accuracy vs. getting results quickly. The most accurate tools tend to be reserved for foundries developing processes, working one device at a time. Their full-chip brethren are increasingly used by designers as tasks once reserved for sign-off migrate up the tool flow in order to reduce the number of design iteration loops.

In the ultra-accurate camp there appear to be two tools contending for gold-standard reference status. Synopsys has Raphael and Magma has QuickCap. Both use the phrases “gold standard” and “reference” in their promotional materials.

While bragging rights may accrue to those that have options for the highest accuracy, the bigger market is for a tool that designers can use, and this is where there is more variety. Raphael has historically had a quicker, less-accurate sibling called Raphael NXT that uses random-walk technology; QuickCap has a similar sibling called QuickCap NX, also using random walk.

The Synopsys situation is slightly complicated by the fact that they also have another tool, StarRC (also labeled a “gold standard”), that is, according to some of the materials, a rule-based tool. But, looking closer, it actually integrates two different solver engines: one rule-based (ScanBand) and one that’s more accurate for critical nets (which used to be Raphael NXT; more on that in a minute).

Likewise, Magma has a Quartz RC tool that uses pattern-matching for full-chip extraction, leveraging QuickCap if necessary for critical nets.

Not to be left out, Cadence also addresses this space with their QRC Extraction tool, which has a range of options to cover high accuracy as well as statistical capabilities for faster run times.

Meanwhile, as we saw last year, Silicon Frontline is moving into the scene using random-walk technology. They claim to be the only solver in TSMC’s newly announced AMS Reference Flow. More on that momentarily.

So, with that context in place, what’s happening more recently?

Well, first of all, Synopsys has announced a new solver engine, called Rapid3D, that replaces Raphael NXT; the latter is no more, although they mention that the new engine has roots in the old one. Rapid3D now sits within StarRC (all versions) alongside the rule-based ScanBand engine as the higher-accuracy option.

Meanwhile, Mentor has come swaggering in with a completely new tool that uses technology acquired through their purchase of Pextra. They call it xACT 3D, and they claim that it’s a “no compromises” approach combining the accuracy and determinism of a full-mesh solver with enough speed to handle full chips – an order of magnitude faster than other solvers. They also claim to be in TSMC’s AMS Reference Flow (which means that either they or Silicon Frontline is wrong).

Mentor is very tight-lipped about how they achieve their claims. They were willing to allow that the modeling methods they use are standard and that their magic resides within clever mathematics that Pextra brought to the table. But even that admission was conceded with caution.

One issue being raised and swatted down in this fight for extractive supremacy is the validity of the statistical methods used in random-walk solvers. Mentor argues that, with a statistical approach, there will always be a number of “outlier” nets that have significant errors. Both Synopsys and Silicon Frontline say that’s not correct (to quote Silicon Frontline’s Dermott Lynch, it’s a “complete fabrication”).

The defense of random walk is that the solutions always converge within an accuracy that can be defined by the user; this convergence happens net-by-net (not as a whole, with some nets being accurate and others not). The accuracy is determined, they say, by the statistics: just as with polling, if you want less uncertainty, you increase your sample size, and there’s no guesswork in selecting the sample size – it’s a well-accepted science.
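If you want to see the polling argument in action, here’s a minimal Monte Carlo sketch in Python. It is emphatically not anyone’s capacitance engine – the “walk” here is just dart-throwing at a quarter circle – but it shows the mechanism the random-walk camp describes: keep sampling until the standard error drops below a user-specified tolerance.

```python
# Generic Monte Carlo with a user-defined accuracy target; a sketch of the
# statistical principle behind random-walk solvers, not any vendor's actual
# random-walk capacitance algorithm.
import math
import random

def monte_carlo_until(sample_fn, rel_tol, min_samples=100):
    """Keep drawing samples until the relative standard error <= rel_tol."""
    total = total_sq = 0.0
    n = 0
    while True:
        x = sample_fn()
        total += x
        total_sq += x * x
        n += 1
        if n >= min_samples:
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)  # sample variance
            stderr = math.sqrt(var / n)
            if stderr <= rel_tol * abs(mean):
                return mean, stderr, n

# Stand-in "walk": does a random dart land inside the quarter circle?
def sample():
    return 1.0 if random.random() ** 2 + random.random() ** 2 <= 1.0 else 0.0

mean, err, n = monte_carlo_until(sample, rel_tol=0.01)
print(f"pi estimate: {4 * mean:.4f} +/- {4 * err:.4f} after {n} samples")
```

Tighten rel_tol and the run simply takes more samples: the accuracy knob and the runtime knob are the same knob.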

The other concern raised is the repeatability of results using a statistical method, since it involves some random elements. But if each run converges to a deterministic limit, then the results themselves should be repeatable (within the selected margin); it’s just that the specific paths randomly walked to get to the solution will differ from run to run – which doesn’t matter, since the result is what counts.
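The repeatability point is just as easy to illustrate (again in Python, again generic Monte Carlo rather than a real extractor): two runs seeded differently take entirely different random paths yet land on the same answer within the statistical margin.

```python
# Two independently seeded runs share no random draws, yet converge to the
# same answer within the statistical margin; a sketch of the repeatability
# argument, not a real extraction engine.
import random

def estimate_pi(seed, n=200_000):
    """Quarter-circle dart-throwing with an independent RNG stream per run."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4.0 * hits / n

run_a = estimate_pi(seed=1)
run_b = estimate_pi(seed=2)
print(f"run A: {run_a:.4f}  run B: {run_b:.4f}  diff: {abs(run_a - run_b):.4f}")
# The two runs typically agree to within a few thousandths at this sample
# size, even though every underlying random draw was different.
```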

And so the claims and counter-claims fly as this sleepy little corner of the EDA world gets something of a life. The energy with which these things are being marketed may make for some interesting watching.

More info:

Cadence QRC Extraction

Magma Quartz RC

Mentor xACT 3D

Silicon Frontline

Synopsys StarRC
