
Showing Your True Corners

Solido Helps Analog Designers Cope with Process Variation

A lot has been written about the increasing difficulty of optimizing a design as process dimensions have become increasingly minute. Not only is it harder to balance performance against area, but power must be considered as well. Managing yield is a constant struggle since it’s no longer a question of where to cut off a distribution tail: it’s a question of how to fix the distribution so that you don’t over- or under-design your product. Too sloppy and you lose a lot of yield; too rigid and you will chew up too much silicon.

As difficult as this is, most of the attention has been focused on digital. It’s even harder for the poor analog folks, for whom “performance” may have diverse meanings according to the intent of the circuit. In the digital world, performance means “speed.” But performance in an analog circuit might include things like gain or phase margin or signal-to-noise ratio (SNR) or bizarre-sounding beasts like “spurious-free dynamic range” (SFDR). Easy for them to say…

Part of figuring out your distribution is figuring out the extent of performance: how bad or good can it get? There are particular combinations of process parameters that give you the worst-case and best-case performance points. In the digital world, where speed rules, this is evaluated by applying combinations of variations that cause your transistors (N-channel and P-channel) to be either fast or slow. There are two transistor types, giving you two variables, meaning you get four combinations, typically denoted FF (Fast-Fast), FS (Fast-Slow), SF (Slow-Fast), and SS (Slow-Slow). These are the process corners. Somewhere in between is the “nominal” or “typical” point.
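
For the digital case, corner analysis boils down to evaluating the circuit at a handful of named process points and keeping the worst result. Here is a minimal sketch of that idea in Python; the corner table and the simulate_delay() stand-in are illustrative only, not tied to any particular simulator or foundry model.

```python
# Minimal sketch of digital corner analysis. simulate_delay() is a toy
# stand-in for a real SPICE run against corner-specific device models.

CORNERS = {
    "FF": {"nmos": "fast", "pmos": "fast"},
    "FS": {"nmos": "fast", "pmos": "slow"},
    "SF": {"nmos": "slow", "pmos": "fast"},
    "SS": {"nmos": "slow", "pmos": "slow"},
    "TT": {"nmos": "typical", "pmos": "typical"},  # the nominal point
}

def simulate_delay(models):
    # Toy stand-in: slower devices add delay (arbitrary units).
    penalty = {"fast": 0.0, "typical": 0.5, "slow": 1.0}
    return 1.0 + 0.3 * penalty[models["nmos"]] + 0.3 * penalty[models["pmos"]]

def worst_case_delay():
    # Evaluate only at the corners instead of sweeping the whole process
    # space, and keep the slowest result.
    results = {name: simulate_delay(models) for name, models in CORNERS.items()}
    worst = max(results, key=results.get)
    return worst, results[worst]
```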

You can think of these points as defining the corners of the sandbox within which you will play. Instead of having to sweep across a wide range of process settings, you can just work at the corners to figure out where the worst case is; this speeds up simulation tremendously since you’ve reduced a “very-very-very-many point” problem to a 4- or 5-point problem. However, the process settings that define the corners for digital performance may not necessarily be the same as those defining the corners for various analog performance metrics. Just because both N-channel and P-channel transistors are as fast as possible doesn’t necessarily mean that’s the point of best SNR for a circuit; there may be a completely different process point that acts as a corner for SNR.

What this has meant for analog designers is that they’ve had to run what are called Monte Carlo simulations. Consider this to be a variation on “enough monkeys on typewriters will eventually type up [put your favorite large impressive work of literature here].” And it’s worse than that: it’s more like, “enough monkeys on typewriters will eventually type up everything that could ever be typed up.” Which takes longer than just coming up with your favorite classic read. The monkeys have to come up with everyone’s favorite classic read – and their most hated classic reads – and the non-classic ones too. (With apologies to you youngsters that don’t know what a typewriter is; go find one in a museum. And for those of you that don’t know what a museum is, go find one in Wikipedia.)

In practical terms, this means sampling many random combinations of process values and simulating the desired parameters with each combination. You do enough combinations that you feel confident you’ve pretty much filled in the solution space, so that you can see the range of possible values for those parameters. Think of it as trying to figure out the extent of a shape. If you know the shape is a square, then four points, one on each edge (whether or not they’re actually on the corners), will tell you everything you need to know. If the shape is an amoeba, on the other hand, you have to put a lot of points down before you start to have confidence that you know where the amoeba starts and stops. (Never mind the fact that, all the time you’re calculating where the amoeba is, it’s moving…)
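
To make the monkey analogy concrete, here is a minimal Monte Carlo sketch in Python. The evaluate_gain() function is a toy stand-in for a real simulator run, and the distributions and spec limit are made up; the point is the pattern of sampling random process values, simulating each one, and looking at the spread.

```python
# Minimal Monte Carlo sketch (illustrative only): sample random process
# parameters, evaluate a toy performance metric per sample, report the range.

import random

def evaluate_gain(delta_vth_n, delta_vth_p):
    # Toy stand-in for simulating one analog metric (say, gain in dB).
    return 40.0 - 25.0 * delta_vth_n + 10.0 * delta_vth_p

def monte_carlo(n_samples=1000, spec_db=38.0, seed=0):
    rng = random.Random(seed)
    gains = []
    for _ in range(n_samples):
        # Random process variation: Vth shifts drawn from normal distributions.
        dvn = rng.gauss(0.0, 0.03)
        dvp = rng.gauss(0.0, 0.03)
        gains.append(evaluate_gain(dvn, dvp))
    yield_estimate = sum(g >= spec_db for g in gains) / n_samples
    return min(gains), max(gains), yield_estimate

if __name__ == "__main__":
    lo, hi, y = monte_carlo()
    print(f"gain range: {lo:.1f} .. {hi:.1f} dB, estimated yield: {y:.1%}")
```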

You can well imagine that this could take a long time if all of your analysis runs require such an approach; your monkeys (and/or your amoeba) may well die before the job is done. In fact, this may be skipped much of the time because there’s just too much pressure to get the circuit shipped. By omitting this analysis, you run the risk of a low-yielding product. So, consistent with a Monte Carlo analogy, you’re counting on Lady Luck to see you through.

Solido is putting forward a set of tools to help optimize analog circuits in a manner that doesn’t require repeated Monte Carlo runs. It’s based on a platform, called Variation Designer, that has access to your design, your SPICE models, and the computer(s) you use for simulation. You can then layer a number of specific tools on top of this.

Most fundamental of these tools is one that allows you to discover the “true” corners of your design. By “true,” they mean the ones that apply to your design and your parameters, not the digital corners, which aren’t particularly useful. Accurate assessments of those corners do require a Monte Carlo run, but you run it once, get the corners, and then use the corners for the remaining analysis.
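
One simple way to picture the idea (an illustration, not necessarily Solido’s actual algorithm) is to take the samples from that single Monte Carlo run and keep the process point whose metric value lands closest to, say, the 3-sigma worst case; that saved point then serves as the “true” corner for the rest of the analysis.

```python
# Hedged sketch of extracting a "true" corner from one Monte Carlo run.
# samples: list of process-parameter dicts; metric_values: matching results.

import statistics

def extract_true_corner(samples, metric_values, sigma=3.0, worst_is_low=True):
    mean = statistics.fmean(metric_values)
    std = statistics.stdev(metric_values)
    target = mean - sigma * std if worst_is_low else mean + sigma * std
    # Keep the simulated point whose metric lands closest to the target.
    idx = min(range(len(metric_values)),
              key=lambda i: abs(metric_values[i] - target))
    return samples[idx], metric_values[idx]
```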

Other types of analysis in their statistical package allow you to run the corners, sweep the design variables, find sensitive devices, analyze mismatch, and verify high-sigma designs. Additional modules will be available in the future to solve new problems as processes get even more complex. The intent of the modules is to analyze the circuit and identify problems automatically, leaving the fix to the engineer. They claim, quite believably in an area like analog, that any automated attempt to fix a circuit would be met with some skepticism.

They’re not actually providing a simulator; they’re hijacking the simulator you already use and just setting up the runs and managing the data. Their integration is tightest with Cadence, but they work with others as well. They can accommodate parallel simulation to make use of as many computers as you have available to crunch all of the data.
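
Conceptually, the parallel piece is just job dispatch: hand each netlist or corner run to whatever simulator you already have and spread the work across available cores or machines. A minimal local-machine sketch follows; the "spice" command and netlist names are placeholders, not Solido’s actual interface.

```python
# Minimal sketch of farming simulator runs out in parallel on one machine.

from concurrent.futures import ProcessPoolExecutor
import subprocess

def run_sim(netlist):
    # Invoke whatever simulator you already use; "spice" is a placeholder.
    subprocess.run(["spice", "-b", netlist], check=True)
    return netlist

def run_all(netlists, max_workers=8):
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_sim, netlists))
```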

Ultimately, if everything pans out as they expect, this tool should make it easier for designers to balance their design for good yield and good performance across the widely varying process that will be used to build it.
