feature article

Uncanny Resemblances

Synopsys Announces Base Curve Compaction for CCS Models

If you had all the time in the world, you could simulate an entire SoC using SPICE, but you don’t, so you can’t. At least not for digital circuits; analog is different, since detailed analysis is required there, and it’s not a billion transistors. And yet, even with digital, we can’t quite revert all the way to 1s and 0s, but we can start to use some abstraction in the form of library cells for basic circuit chunks like transistors, inverters, gates, and flip-flops. Those cells can be characterized using SPICE (and/or physical measurement), and, from that information, models can be built that higher-level tools can use to help determine the delay and/or power and/or noise characteristics of your circuit. But any abstraction, pretty much by definition, means you give up some accuracy; as long as that sacrifice is small, it’s a reasonable price to pay.

For the purposes of figuring out how long it takes a signal to get through a gate, one used to use a pretty high level of abstraction. Pick a voltage, slew rate, and load, and look up the delay in a table. It doesn’t completely abstract away the analog (there is a slew rate, after all), but damn near. That was the old non-linear delay model (NLDM) method. Problem is, it gets less and less accurate at the more aggressive technology nodes. So we need to move back a bit towards the analog realm, giving up some abstraction. A benefit of abstraction is doing more, more quickly, and with less data, so giving up abstraction means more data and slower runs. More on that later.
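As a minimal sketch of what an NLDM-style lookup boils down to, consider the Python below. The table values, axis points, and the helper name nldm_delay are all invented for illustration; a real library has many more tables and corners.

```python
# Sketch of an NLDM-style delay lookup: a 2D table indexed by input slew
# and output load, with bilinear interpolation between the characterized
# points. All numbers here are invented for illustration.

from bisect import bisect_right

slews = [0.01, 0.05, 0.20]      # ns, characterized input transition times
loads = [0.001, 0.005, 0.020]   # pF, characterized output capacitances
delay = [                       # ns, cell delay at each (slew, load) point
    [0.020, 0.035, 0.080],
    [0.025, 0.040, 0.090],
    [0.040, 0.060, 0.120],
]

def _bracket(axis, x):
    """Return indices (i, i+1) of the table points surrounding x."""
    i = min(max(bisect_right(axis, x) - 1, 0), len(axis) - 2)
    return i, i + 1

def nldm_delay(slew, load):
    """Bilinearly interpolate the delay table at (slew, load)."""
    i0, i1 = _bracket(slews, slew)
    j0, j1 = _bracket(loads, load)
    ts = (slew - slews[i0]) / (slews[i1] - slews[i0])
    tl = (load - loads[j0]) / (loads[j1] - loads[j0])
    top = delay[i0][j0] * (1 - tl) + delay[i0][j1] * tl
    bot = delay[i1][j0] * (1 - tl) + delay[i1][j1] * tl
    return top * (1 - ts) + bot * ts

print(nldm_delay(0.03, 0.003))  # a single number out -- that's the whole model
```

The point to notice is how little comes out: one number per arc, with no information about the shape of the transition.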

Back in simpler times, you could model a cell simply as a driver attached to a lumped load. That doesn’t work anymore; you really need to model a driver, an interconnect network, and a receiver. One of the main problems is that interconnect impedance keeps getting higher relative to the driver impedance. Once the network dominates the voltage divider formed by the driver impedance and the network, the simple lumped view says the output response should just track the driving voltage regardless of the interconnect – which clearly isn’t what happens in a real circuit.

There are other problems. When trying to determine a single equivalent capacitance to use at the receiver, it’s hard to find one value that gives both the right transition time and the right delay for rising and falling edges (although modifications to the older methods did allow some swapping of capacitances depending on switching direction). This is compounded when the Miller effect – where the input-to-output capacitance shows up at the input multiplied by roughly the gain of the cell – is significant. Modeling input waveforms as simple ramps is also inaccurate and can produce simulation results that are too optimistic.
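For reference, the textbook first-order Miller approximation looks roughly like the sketch below; the numbers and the helper name miller_input_cap are made up, and real characterization handles this far more carefully.

```python
# First-order Miller approximation (a textbook sketch, not the CCS model):
# the input-to-output (feedback) capacitance appears at the input
# multiplied by (1 + |gain|), so the "equivalent" input capacitance shifts
# with gain -- and with switching direction, since the effective gain differs.

def miller_input_cap(c_in, c_feedback, gain):
    """Effective input capacitance seen by the driving stage."""
    return c_in + c_feedback * (1 + abs(gain))

print(miller_input_cap(c_in=1.0e-15, c_feedback=0.3e-15, gain=4))  # farads
```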

One solution to this that came and went was the scalable polynomial delay model (SPDM), which attempted to use polynomials to model cell response more accurately. According to Synopsys’ Robert Hoogenstryd, the issue here was the amount of work required to generate the models – apparently the curve-fitting work was onerous – and this never really got any traction. Another approach was going to be needed.

So let’s step back a second. We’ve found that simply trying to track a switching voltage as it ramps across a threshold isn’t accurate enough. The fine-grained dynamics make the ramp non-linear and may actually perturb the threshold during the transition, and the effective capacitances may vary with time as well. And all this just to model the voltage behavior. But what causes a node to change voltage? It’s the charge being delivered to the node via a current. OK, that’s kind of redundant since, by definition, the only way charge gets delivered is via a current. But you know what I mean.

The point is, the things that are hard to figure out when playing only with voltage are the things that result from changes in the way current flows. Voltage is merely the effect; current is the cause. If you stick with the current as the central feature, you can always figure out the voltage at a given time as long as you know the capacitances and then sum up all the charge that gets deposited into the capacitors. If you only know the voltage at a given time, you can’t necessarily go backwards and figure out exactly how the charge got there.
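Here’s a sketch of that current-first view: given a piece-wise linear current waveform and a known node capacitance, the voltage falls out by accumulating charge. The waveform and capacitance values are invented for illustration.

```python
# "Current is the cause": accumulate the charge delivered by a piece-wise
# linear current waveform into a node of known capacitance to recover the
# node voltage (trapezoidal integration). All values are invented.

t_ns = [0.0, 0.1, 0.2, 0.4, 0.8]     # time points, ns
i_ua = [0.0, 30.0, 40.0, 15.0, 0.0]  # current flowing into the node, uA
c_ff = 10.0                          # node capacitance, fF

v = [0.0]                            # node starts at 0 V
for k in range(1, len(t_ns)):
    dq_fc = 0.5 * (i_ua[k] + i_ua[k - 1]) * (t_ns[k] - t_ns[k - 1])  # uA*ns = fC
    v.append(v[-1] + dq_fc / c_ff)   # fC / fF = V

print(list(zip(t_ns, v)))            # the reconstructed voltage waveform
```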

The result is current-based modeling. Cells are modeled with current sources instead of voltage sources. And, unlike the NLDM approach, where tables are built with look-up values in each entry, here tables are built with a waveform in each entry. For each slew/load combination, you end up with the time-varying characteristics of the current. Practically speaking, when characterizing this curve, it is measured or simulated very accurately, and then a piece-wise linear approximation is created (adaptively sampling to use more points where things are changing quickly, fewer where they aren’t) and stored in the library.
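The adaptive-sampling idea can be sketched with a generic curve-simplification routine like the one below. This is just an illustration of the concept – the data and the helper name pwl_compress are invented, and it is not Synopsys’ actual characterization flow.

```python
# Adaptive piece-wise linear compression: keep splitting a segment wherever
# the straight line between its endpoints misses the detailed simulation
# data by more than a tolerance, so points cluster where the curve bends
# and thin out where it's flat.

import math

def pwl_compress(t, y, tol):
    """Return indices of samples to keep so the PWL curve stays within tol."""
    keep = {0, len(t) - 1}

    def split(lo, hi):
        if hi - lo < 2:
            return
        worst_k, worst_err = None, 0.0
        for k in range(lo + 1, hi):
            frac = (t[k] - t[lo]) / (t[hi] - t[lo])
            y_chord = y[lo] + frac * (y[hi] - y[lo])
            err = abs(y[k] - y_chord)
            if err > worst_err:
                worst_k, worst_err = k, err
        if worst_err > tol:
            keep.add(worst_k)
            split(lo, worst_k)
            split(worst_k, hi)

    split(0, len(t) - 1)
    return sorted(keep)

# Densely "simulated" samples of a current pulse (invented data):
t = [k * 0.01 for k in range(101)]
i = [math.exp(-((tk - 0.3) / 0.1) ** 2) for tk in t]
print(pwl_compress(t, i, tol=0.02))  # dense near the peak, sparse on the flats
```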

There are actually two manifestations of this approach. An earlier one, called ECSM and championed by Cadence, actually retains its voltage-based approach “externally” – that is, a voltage is applied to the cell and a voltage waveform is captured even though current dynamics control the calculations. Cadence claims that this simplifies characterization since voltage is easier to measure and control than current. Meanwhile, Synopsys favors their own CCS technology, in which everything is represented as a current, with voltages created through integration as needed during simulation when the cell is used. They claim that there are times when current is required, and if all you have is voltage, then you have to go backwards to approximate what the current is – less accurate than having the original current measurements and calculating an accurate voltage where needed.
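To see why the “going backwards” direction worries them, consider the sketch below, in which a sparsely sampled voltage waveform and an assumed fixed capacitance are all you have. The numbers are invented, and this is neither vendor’s actual math.

```python
# Recovering current from a sampled voltage waveform: with only PWL voltage
# points and an assumed fixed capacitance, i = C * dV/dt turns each voltage
# segment into a crude, stair-stepped constant current.

t_ns = [0.0, 0.1, 0.3, 0.6, 1.0]     # ns
v    = [0.0, 0.2, 0.9, 1.1, 1.2]     # V, a sampled output transition
c_ff = 10.0                          # assumed load capacitance, fF

i_ua = [c_ff * (v[k] - v[k - 1]) / (t_ns[k] - t_ns[k - 1])  # fF*V/ns = uA
        for k in range(1, len(v))]
print(i_ua)   # one constant current per segment -- the fine detail is gone
```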

You might reasonably ask where current is needed. After all, if an accurate voltage waveform is presented that represents the internal current dynamics correctly, then wouldn’t that be an easier way to model delay? Perhaps, although more modern simulators actually use current to calculate delays. Even so, these models are also used for noise and power calculations, and power in particular is almost exclusively a current-oriented phenomenon. On the other hand, is the ECSM approach “good enough”? We’ll let the market hash that one out; that’s not the goal here.
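As a quick illustration of why power analysis leans on current: the energy drawn from the supply over one switching event is just the supply voltage times the integrated supply current. A sketch with invented numbers:

```python
# Energy per switching event from a PWL supply-current waveform:
# integrate the current (trapezoids) to get charge, then multiply by Vdd.
# Invented waveform; uA/ns/fF-style units as in the earlier sketches.

t_ns = [0.0, 0.1, 0.2, 0.4, 0.8]
i_ua = [0.0, 30.0, 40.0, 15.0, 0.0]
vdd = 0.9                             # V

q_fc = sum(0.5 * (i_ua[k] + i_ua[k - 1]) * (t_ns[k] - t_ns[k - 1])
           for k in range(1, len(t_ns)))        # uA*ns = fC
print(q_fc * vdd)                     # fC * V = fJ per event
```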

In fact, all of this is really background to what’s new, since even CCS has been around for a few years. Whether you use a voltage- or current-based waveform as input and output, the basic change from NLDM models is that a single value in a table has been replaced with a waveform. Even though the waveform has been simplified as a piecewise linear curve, you have still replaced a simple value lookup with several values describing a curve. In other words, the amount of data needed to represent the library has shot up.

In addition to data storage, calculation time can be substantial when looking up waveforms. It may sound silly that a lookup should take much time, and, in fact, if the value you’re using to index into the table or into a waveform happens to match exactly a value that is explicitly in the table or waveform, then you get your result quickly and you’re on your way. That’s not usually the case, however; usually you have a value that’s between two stored points, so now you have to interpolate. Algorithms exist for that, but they take time.
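Here’s a rough sketch of why that lookup isn’t free once the table entries are waveforms rather than single numbers. The stored entries below are invented, and the blending scheme is generic linear interpolation, not any vendor’s exact method.

```python
# Blending two stored PWL waveforms for an off-grid slew/load point:
# each curve first has to be evaluated at arbitrary times, then the two
# results are weighted together.

from bisect import bisect_right

def pwl_eval(t_pts, y_pts, t):
    """Evaluate a piece-wise linear waveform at an arbitrary time t."""
    if t <= t_pts[0]:
        return y_pts[0]
    if t >= t_pts[-1]:
        return y_pts[-1]
    k = bisect_right(t_pts, t)
    frac = (t - t_pts[k - 1]) / (t_pts[k] - t_pts[k - 1])
    return y_pts[k - 1] + frac * (y_pts[k] - y_pts[k - 1])

def blend_waveforms(wave_a, wave_b, alpha, times):
    """Weighted blend of two stored waveforms, sampled on a common time grid."""
    return [(1 - alpha) * pwl_eval(*wave_a, t) + alpha * pwl_eval(*wave_b, t)
            for t in times]

# Two invented table entries (time points in ns, current in uA):
wave_lo = ([0.0, 0.1, 0.3, 0.6], [0.0, 25.0, 10.0, 0.0])
wave_hi = ([0.0, 0.2, 0.5, 0.9], [0.0, 40.0, 15.0, 0.0])
grid = [k * 0.05 for k in range(19)]
print(blend_waveforms(wave_lo, wave_hi, alpha=0.4, times=grid))
```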

Synopsys recently announced that they’ve applied what they term “base curve” compaction to their CCS libraries, and that this has been adopted into the standard Liberty format used for cell libraries. Rather than storing all the waveforms explicitly, they noticed that there were a number of fundamental wave “shapes” (at the risk of oversimplifying) to which all other waveforms could be related by a series of simple offset parameters.

They actually found that they could get more similarities between curves if they went from current-vs-time curves to normalized I-V base curves. An actual I-V curve can then be described by a reference to a base curve plus four parameters – the starting current, peak current, peak voltage, and time-to-peak – five numbers in total, counting the base-curve reference. In some cases, two base curves are used per I-V curve: one for the left half, one for the right half. Without this compaction, the ten or so data points for the curve have to be stored, and, since those are adaptively sampled (rather than being taken at fixed, known points), both coordinates of each point have to be saved – twenty (or so) numbers in all. Five numbers vs. twenty: you be the judge.
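Conceptually, the reconstruction looks something like the sketch below. The normalization and scaling here are assumptions for illustration – they are not the actual Liberty compact-CCS equations – and the base-curve data and the helper name reconstruct are invented.

```python
# Base-curve compaction, conceptually: a shared library of normalized
# current-vs-voltage "shapes," plus, per actual curve, a base-curve index
# and four parameters.

# One normalized base curve: current shape over a voltage axis that runs
# from 0 (starting point) to 1 (peak). Values are invented.
BASE_CURVES = {
    7: ([0.0, 0.25, 0.5, 0.75, 1.0], [0.0, 0.55, 0.85, 0.97, 1.0]),
}

def reconstruct(base_id, i_init, i_peak, v_peak, t_peak):
    """Rebuild an approximate I-V curve from a base-curve reference and 4 numbers."""
    v_norm, i_norm = BASE_CURVES[base_id]
    volts = [vn * v_peak for vn in v_norm]                            # denormalize V
    amps  = [i_init + inorm * (i_peak - i_init) for inorm in i_norm]  # denormalize I
    return volts, amps, t_peak        # t_peak anchors the curve in time

# Five stored numbers stand in for a whole curve:
print(reconstruct(base_id=7, i_init=0.0, i_peak=42.0, v_peak=0.45, t_peak=0.12))
```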

Note that there is no intent to imply any physical relationship between a base curve and the curves that can be derived mathematically from it; the normalized base curves can be stored in a base-curve database with no indication of where they came from. So a particular curve from one cell may be associated with a base curve derived from a completely different cell. That’s not to suggest anything causative – it’s strictly a mathematical convenience. It’s like noting that Leonard Cohen looks like Dustin Hoffman or that Slovenia looks just like a running chicken; it may be true, but it doesn’t mean they’re related.

This has reduced the amount of data that needs to be stored by about 75%, in addition to cutting the time required to interpolate; they’ve seen PrimeTime run as much as 60% faster. So even though the simulations are now done with the same accuracy as before – and with much more accuracy than was available using the NLDM approach – results can be achieved more quickly. Having been added to the Liberty format, the compaction scheme is now available for download and general use within a variety of tools.

Links:
CCS
ECSM
