
Faster Space Exploration

Infiniscale Uses Behavioral Modeling to Design for Analog Yield

In yet another installment of how life has gotten complicated in the design-for-manufacturing (or design-for-yield) world, we re-enter the world of the modern designer, as contrasted with the designer of yore. Erstwhile designers followed rules, and, assuming the designs passed their tests on the way from design to manufacturing, the designer could give him- or herself a well-deserved pat on the back, release a satisfied sigh, and move on to the next project. What happened to the design at that point was not of concern to the design engineer, because the design had escaped the realm of design. Testing was handled by test engineers, and yield enhancement was handled by – you guessed it – yield enhancement engineers.

Things aren’t quite so cozy anymore, and designers are being held responsible for designs that actually yield. The issue is no longer one of making sure that the peak of the distribution lies within the specified window; the trick now is trying to cram as much of the distribution as possible – hopefully all of it – inside that window. It’s bad enough with digital, where your basic parametrics are speed and power, but with analog, you may have any number of parameters that matter, relating to gain, phase, stability, etc. Keeping the distributions of all of those parameters tightly within spec is no trivial task, and the number of parameters impacting optimization is growing way out of control.

What we have here is some serious multi-variable mathematics that must be optimized – multiple inputs and multiple outputs. And you thought scheduling airplanes was complicated. (Of course they manage to tolerate much less robustness in their schedules, but I digress.) Conceptually, it’s a multi-dimensional surface, and you’re trying to find simultaneous global maxima/minima (optima?) without getting lulled into some local optimum that pushes other parameters out of whack.
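
To make the local-optimum trap concrete, here’s a toy Python sketch – an invented two-peak surface, not anything from Infiniscale – showing how a greedy search converges to whichever peak happens to be nearest its starting point:

```python
# Toy illustration only: an invented 2-D "performance" surface with a
# local peak near (-1, -1) and the true global peak near (2, 2).
import numpy as np

def f(x, y):
    return (np.exp(-((x + 1)**2 + (y + 1)**2)) +
            2.0 * np.exp(-0.5 * ((x - 2)**2 + (y - 2)**2)))

def hill_climb(x, y, step=0.1, iters=300):
    # Greedy local search: repeatedly move to the best neighboring point.
    for _ in range(iters):
        x, y = max(((x + dx, y + dy) for dx in (-step, 0, step)
                                     for dy in (-step, 0, step)),
                   key=lambda p: f(*p))
    return x, y, f(x, y)

print(hill_climb(-1.5, -1.5))  # lulled into the local peak, f ~ 1.0
print(hill_climb(1.0, 1.0))    # finds the global peak, f ~ 2.0
```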

What typically happens is that simulation is used in an iterative process to locate an optimal solution on this mathematical surface. The surface has a huge number of points (ok, technically infinite, but huge from a practical standpoint), and the entire space can be explored to decide which point is best. Each point can be determined by running a simulation with a given set of values for each of the dimensions. Of course, these simulations can take a while to run, so while theoretically you can map every point on the surface to some satisfactory degree of precision, in practice that would take far too long. The other issue is that each time you want an answer to a question regarding some as-yet unexplored point on the surface, you have to wait for the simulation to complete before you know the answer, limiting the speed with which you can run “what if” scenarios.
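
As a rough illustration of that conventional flow, the sketch below brute-forces a two-parameter sweep where every query of the surface costs one “simulation” (a hypothetical stand-in with an artificial delay); even a coarse grid adds up fast:

```python
# Hedged sketch of the conventional flow: every design-space query costs
# a full simulation. run_simulation() is a hypothetical SPICE stand-in;
# the sleep stands in for the minutes a real analog simulation can take.
import itertools
import time

def run_simulation(w, l):
    time.sleep(0.1)  # pretend this is minutes of SPICE runtime
    return {"gain_db": 40 - 0.5 * (w - 10)**2 - 0.8 * (l - 2)**2}

# Even a coarse 20 x 20 sweep of just two parameters is 400 simulations;
# with more dimensions the count explodes exponentially.
widths = [8 + 0.2 * i for i in range(20)]
lengths = [1 + 0.1 * i for i in range(20)]
best = max(itertools.product(widths, lengths),
           key=lambda p: run_simulation(*p)["gain_db"])
print("best (w, l) found:", best)
```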

Infiniscale has taken a different approach, one they call behavioral modeling. Their intent is to provide a more efficient means of optimizing analog, RF, and mixed-signal designs to improve yield. They still use simulation, but in a more limited manner, and in a way that allows the actual exploration of the design space to happen much more quickly.

First, the range of each dimension is specified; the tool then automatically runs simulations across some number of points and does a curve fit to create a mathematical model of the design-space surface. They don’t say exactly how they create the curves, but it’s apparently not polynomial or posynomial (I’m not even going to pretend I didn’t have to look that one up). Obviously, the more points simulated, the more accurate the model, but the longer it takes to generate. Users can then play with design scenarios, and no simulation has to be run for each point being explored: the point is simply calculated on the surface using the mathematical – or, as they call it, behavioral – model.
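
Infiniscale doesn’t disclose its fitting method, but the flavor of the approach can be sketched with an off-the-shelf stand-in – here a radial-basis-function fit from SciPy, with an invented circuit response playing the role of the simulator:

```python
# Infiniscale's curve fit is undisclosed; this stand-in uses SciPy's
# radial-basis-function interpolator. simulate() is an invented response
# playing the role of the SPICE runs that seed the model.
import numpy as np
from scipy.interpolate import RBFInterpolator

def simulate(points):
    w, l = points[:, 0], points[:, 1]
    return 40 - 0.5 * (w - 10)**2 - 0.8 * (l - 2)**2

rng = np.random.default_rng(0)
samples = rng.uniform([8, 1], [12, 3], size=(50, 2))  # ranges: w in [8,12], l in [1,3]
model = RBFInterpolator(samples, simulate(samples))   # the "behavioral model"

# "What if" queries are now cheap evaluations, not simulations.
queries = np.array([[10.3, 2.1], [9.0, 1.5]])
print(model(queries))  # near-instant, versus minutes per SPICE run
```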

This technology is part of their overall Lysis tool suite. The TechModeler tool is the one that builds the model. It can use the output of any SPICE-like simulation tool that provides comma-separated-values (CSV) output. It automates the process of setting up the parameter values for each simulation, launching the simulation, recording the results, and then building the model.
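
The automation loop is conceptually simple, something along these lines (the simulator command, parameter names, and CSV column names below are all hypothetical placeholders, not a real tool’s interface):

```python
# Hedged sketch of the automation described: set parameter values, launch
# a CSV-emitting simulator, record the results. "spice_sim" and the
# column names are invented for illustration.
import csv
import itertools
import subprocess

def run_point(w, l):
    subprocess.run(["spice_sim", "--param", f"W={w}", "--param", f"L={l}",
                    "--out", "result.csv"], check=True)
    with open("result.csv", newline="") as f:
        row = next(csv.DictReader(f))  # one row of measured outputs
    return float(row["gain_db"]), float(row["phase_margin_deg"])

training = []
for w, l in itertools.product([8, 9, 10, 11, 12], [1.0, 1.5, 2.0, 2.5, 3.0]):
    training.append(((w, l), run_point(w, l)))  # inputs -> outputs for the fit
```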

Once built, this model can then be used by the TechAnalyzer tool, which provides Monte Carlo and sensitivity analysis. They claim that the equivalent of 100,000 runs can be handled in five seconds. A TechSizer tool is available to play with the sizing of components to identify corners and optimize performance. These tools have been on the market for up to two years.
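
The speed claim makes sense once the model replaces the simulator: Monte Carlo becomes array arithmetic. Here’s a hedged sketch, with an analytic function standing in for the fitted model, and invented process spreads and spec limits:

```python
# Sketch of Monte Carlo against the behavioral model: 100,000 "runs"
# become 100,000 cheap function evaluations. The analytic model(),
# process spreads, and spec window are all invented for illustration.
import numpy as np

def model(points):  # stand-in for the fitted behavioral model
    w, l = points[:, 0], points[:, 1]
    return 40 - 0.5 * (w - 10)**2 - 0.8 * (l - 2)**2

rng = np.random.default_rng(1)
points = rng.normal(loc=[10.0, 2.0], scale=[0.8, 0.2], size=(100_000, 2))
gain = model(points)  # vectorized: milliseconds, not simulator-hours

in_spec = (gain >= 38.0) & (gain <= 42.0)
print(f"estimated yield: {in_spec.mean():.1%}")
```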

They have just released a new tool, TechYielder, which optimizes the distributions of multiple parameters to provide the highest yield. While individual design space points can be queried, with results in a few minutes, the tool can also automatically determine a global optimum that will place, if possible, the entire process distribution within the yielding range. Even if the distribution is already high-yielding, it will attempt to find a solution that better centers, narrows, and normalizes the distribution.
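
Again, the actual algorithm isn’t disclosed, but the yield-centering idea can be sketched as an optimization over the nominal design point, shifting the whole process distribution around until as much of it as possible lands in spec (model, spreads, and specs invented as before):

```python
# Sketch of the yield-centering idea (TechYielder's actual algorithm is
# not public): pick the nominal point that puts the most of the process
# distribution inside spec. Model, spreads, and specs are invented.
import numpy as np
from scipy.optimize import minimize

def model(points):  # stand-in for the fitted behavioral model
    w, l = points[:, 0], points[:, 1]
    return 40 - 0.5 * (w - 10)**2 - 0.8 * (l - 2)**2

rng = np.random.default_rng(2)
offsets = rng.normal(scale=[0.8, 0.2], size=(5000, 2))  # frozen variation draws

def neg_yield(nominal):
    gain = model(nominal + offsets)  # shift the whole distribution
    return -np.mean((gain >= 38.0) & (gain <= 42.0))

res = minimize(neg_yield, x0=[9.0, 1.5], method="Nelder-Mead")
print("centered nominal (w, l):", res.x, " yield:", -res.fun)
```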

Examples of the typical size range and content of blocks that can be optimized are analog-to-digital converters, voltage-controlled oscillators, bandgap references, filters, SerDes circuits, and the like. The approach isn’t generally suited to full-chip optimization, since a full SoC will have multiple blocks, each with its own set of performance specs. Instead, each block of the SoC – or perhaps each library element – is optimized and then assembled into the SoC.

With increased ability to model the effects of process variations, designers can optimize their circuits before ever signing them off. Rather than yield enhancement engineers trying to tweak things here and there to improve yields – a process that would seem somewhat dicey with sensitive analog circuitry – the design can be centered by the person most familiar with the circuit. And if the block is a cell or library module, then the benefits of that optimization can accrue to every circuit using that block, leveraging the optimization work much further.

Reviewing some of the materials from the earlier product, it feels like the behavioral modeling technology originally started as a means of looking at different data points without having to do a lot of new simulations, with progressively more automation being added since then. Looking at today’s big picture with the current offering, the TechYielder tool allows the design to be robust with respect to all the variation anticipated in the manufacturing process, and the TechAnalyzer and TechSizer tools allow robustness with respect to the variation in the use environment. If they deliver as promised, it certainly simplifies the process of trying to design around the increasing number of variables that must be considered.

