
Extracting a Life

Cutting corners never seemed like cutting corners before. It seemed like a practical way to handle real-world problems. You don’t want to make any problem more complex than it needs to be or it will be hard to solve on an everyday basis. So you simplify.

But if the world gets more complicated, then what were once practical simplifications now look like cutting corners, and that’s no longer acceptable.

And so it goes with RC extraction tools. You just can’t ignore the things you used to be able to ignore. This sleepy little corner of EDA has historically drifted quietly along in the shadows, but, more recently, has made some moves towards the spotlight.

And for good reason. The crazy geometries being built in foundries result in all kinds of interactions that never used to occur (or matter). So the simple rule-based approaches of yesteryear are quickly giving way to sophisticated mathematical juggernauts; the question is how to balance accuracy against the time it takes to calculate.

We looked at some of the field solver technology that’s become central to these tools last year, but that discussion focused on the technology itself – with a couple small startups providing examples of two competing approaches to the problem.

But the entire landscape has started to morph as incumbents and pretenders (in the old to-the-throne sense), companies big and small, are making noise about their new approaches to extraction. So let’s take a look at the landscape as it appears to this simple scribe and then use that as context for a couple of recent announcements.

First of all, to be clear, the purpose of these tools is to understand how different structures on a chip affect each other electrically. This used to refer mostly to devices sitting side by side in a monolithic silicon wafer, but it is increasingly important for 3D considerations as die-stacking and through-silicon vias (TSVs) move into the mainstream.

In the old days, you could model these interactions as lumped parasitic resistors and capacitors and build a nice simple circuit network (one that seems simple only in retrospect). No longer: now the goal is full-wave analysis, solving Maxwell’s equations. But that’s hard and takes a long time, so companies approximate or approach it using varying clever techniques to get the best balance. Each company hopes to be the cleverest.
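As a toy illustration of the kind of simple lumped approximation that full-wave analysis is displacing, here is a parallel-plate estimate of one wire segment's capacitance to a ground plane. The function name, dimensions, and dielectric choice are hypothetical; real extractors add fringing and coupling terms that this sketch ignores.

```python
# Toy sketch of a rule-based lumped-capacitance estimate: treat one
# wire's bottom face over a ground plane as a parallel-plate capacitor.
# Illustration only -- not any vendor's extraction rule.

EPS_0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R_SIO2 = 3.9    # relative permittivity of an SiO2 dielectric

def plate_cap(width_m, length_m, dielectric_thickness_m, eps_r=EPS_R_SIO2):
    """C = eps * A / d for the wire's bottom face over the plane."""
    area = width_m * length_m
    return eps_r * EPS_0 * area / dielectric_thickness_m

# A 0.1 um x 100 um wire sitting 0.2 um above the plane:
c = plate_cap(0.1e-6, 100e-6, 0.2e-6)
# comes out on the order of a couple of femtofarads
```

Simple formulas like this are fast and good enough when wires are far apart; it's exactly the 3D fringing and coupling they omit that makes field solvers necessary at today's geometries.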

Broadly speaking, extraction tools can be divided into two categories: those that are used to model single devices or small circuits extraordinarily accurately and those that are used for sign-off on a full chip. Obviously the difference here is scale, and the tradeoff is accuracy vs. getting results quickly. The most accurate tools tend to be reserved for foundries developing processes, working one device at a time. Their full-chip brethren are increasingly used by designers as tasks once reserved for sign-off migrate up the tool flow in order to reduce the number of design iteration loops.

In the ultra-accurate camp there appear to be two tools contending for gold-standard reference status. Synopsys has Raphael and Magma has QuickCap. Both use the phrases “gold standard” and “reference” in their promotional materials.

While bragging rights may accrue to those that have options for the highest accuracy, the bigger market is for a tool that designers can use, and this is where there is more variety. Raphael has historically had a quicker, less-accurate sibling called Raphael NXT that uses random-walk technology; QuickCap has a similar sibling called QuickCap NX, also using random walk.

The Synopsys situation is slightly complicated by the fact that they also have another tool, StarRC (also labeled a “gold standard”) that is, according to some of the materials, a rule-based tool. But, looking closer, it actually integrates two different solver engines: one rule-based (ScanBand) and one that’s more accurate for critical nets (which used to be Raphael NXT; more on that in a minute).

Likewise, Magma has a Quartz RC tool that uses pattern-matching for full-chip extraction, leveraging QuickCap if necessary for critical nets.

Not to be left out, Cadence also addresses this space with their QRC Extraction tool, which has a range of options to cover high accuracy as well as statistical capabilities for faster run times.

Meanwhile, as we saw last year, Silicon Frontline is moving into the scene using random-walk technology. They claim to be the only solver in TSMC’s newly announced AMS Reference Flow. More on that momentarily.

So, with that context in place, what’s happening more recently?

Well, first of all, Synopsys has announced a new solver engine, called Rapid3D, that replaces Raphael NXT; the latter is no more, although they mention that the new engine has roots in the old one. Rapid3D now sits within StarRC (all versions) alongside the rule-based ScanBand engine as the higher-accuracy option.

Meanwhile, Mentor has come swaggering in with a completely new tool that uses technology acquired through their purchase of Pextra. They call it xACT 3D, and they claim that it’s a “no compromises” approach combining the accuracy and determinism of a full-mesh solver with enough speed to handle full chips – an order of magnitude faster than other solvers. They also claim to be in TSMC’s AMS Reference Flow (which means that either they or Silicon Frontline is wrong).

Mentor is very tight-lipped about how they achieve these claims. They were willing to allow that the modeling methods they use are standard and that their magic resides in clever mathematics that Pextra brought to the table. Even that much was conceded with caution.

One issue being raised and swatted down in this fight for extractive supremacy is the validity of the statistical methods used in solvers based on random-walk technology. Mentor argues that, with a statistical approach, there will always be a number of “outlier” nets with significant errors. Both Synopsys and Silicon Frontline say that’s not correct (to quote Silicon Frontline’s Dermott Lynch, it’s a “complete fabrication”).

The defense of random walk is that the solutions always converge within an accuracy that can be defined by the user; this convergence happens net-by-net (not as a whole, with some nets being accurate and others not). The accuracy is determined, they say, by the statistics: just as with polling, if you want less uncertainty, you increase your sample size, and there’s no guesswork in selecting the sample size – it’s a well-accepted science.
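The polling analogy can be sketched numerically: a Monte Carlo estimate's standard error shrinks as 1/√N, so quadrupling the sample count roughly halves the uncertainty. The estimator below is a hypothetical toy, not any vendor's solver.

```python
# Sketch of the statistical argument behind random-walk extraction:
# standard error of a Monte Carlo estimate scales as 1/sqrt(N).
import random
import statistics

def mc_estimate(n, seed):
    """Estimate the mean of a noisy quantity from n random samples."""
    rng = random.Random(seed)
    samples = [1.0 + rng.gauss(0.0, 0.5) for _ in range(n)]
    mean = statistics.fmean(samples)
    stderr = statistics.stdev(samples) / n ** 0.5
    return mean, stderr

_, err_small = mc_estimate(1_000, seed=1)
_, err_large = mc_estimate(16_000, seed=1)
# with 16x the samples, the uncertainty is roughly 4x smaller
```

The user-selectable accuracy the vendors describe is just this knob: pick the error bar you want, and the sample count follows from well-understood statistics rather than guesswork.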

The other concern raised is the repeatability of results using a statistical method, since it involves some random elements. But if each run converges to a deterministic limit, then results themselves should be repeatable (within the selected margin); it’s just that the specific paths randomly walked to get to the solution will be different for each run – which doesn’t matter since the result is what counts.
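The repeatability point is easy to illustrate with another hypothetical toy estimator: runs seeded differently take different random paths, yet land on the same answer within the statistical margin.

```python
# Sketch: two Monte Carlo runs with different seeds walk different
# random paths but agree within the converged tolerance.
# Hypothetical estimator, not any vendor's solver.
import random

def estimate(n, seed):
    """Monte Carlo estimate of a fixed quantity (true value 2.0)."""
    rng = random.Random(seed)
    return sum(2.0 + rng.gauss(0.0, 0.1) for _ in range(n)) / n

run_a = estimate(100_000, seed=11)
run_b = estimate(100_000, seed=99)
# run_a and run_b are not bit-identical, but both sit within a tight
# statistical margin of the true value
```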

And so the claims and counter-claims fly as this sleepy little corner of the EDA world gets something of a life. The energy with which these things are being marketed may make for some interesting watching.

More info:

Cadence QRC Extraction

Magma Quartz RC

Mentor xACT 3D

Silicon Frontline

Synopsys StarRC

