
Uncanny Resemblances

Synopsys Announces Base Curve Compaction for CCS Models

If you had all the time in the world, you could simulate an entire SoC using SPICE, but you don’t, so you can’t. At least not for digital circuits; analog is different, since detailed analysis is required there, and it’s not a billion transistors. And yet, even with digital, we can’t quite revert all the way to 1s and 0s, but we can start to use some abstraction in the form of library cells for basic circuit chunks like inverters, gates, and flip-flops. Those cells can be characterized using SPICE (and/or physical measurement), and, from that information, models can be built that higher-level tools can use to help determine the delay and/or power and/or noise characteristics of your circuit. But any abstraction, pretty much by definition, means you give up some accuracy; as long as that sacrifice is small, it’s a reasonable price to pay.

For the purposes of figuring out how long it takes a signal to get through a gate, one used to use a pretty high level of abstraction. Pick a voltage, slew rate, and load, and look up the delay in a table. It doesn’t completely abstract away the analog (there is a slew rate, after all), but damn near. That was the old non-linear delay model (NLDM) method. Problem is, it gets less and less accurate at the more aggressive technology nodes. So we need to move back a bit towards the analog realm, giving up some abstraction. The benefit of abstraction is doing more, more quickly, and with less data, so giving up abstraction means more data and slower runtimes. More on that later.
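To make that concrete, here’s a minimal sketch of an NLDM-style lookup – all the numbers are invented, and real Liberty tables carry more structure, but the essence is a small grid indexed by slew and load plus bilinear interpolation:

```python
import bisect

slews = [0.01, 0.05, 0.20]      # input slew index points (ns) - made up
loads = [0.001, 0.004, 0.016]   # output load index points (pF) - made up
delays = [                      # delay values (ns), one per slew/load pair
    [0.020, 0.035, 0.070],
    [0.025, 0.042, 0.081],
    [0.038, 0.060, 0.105],
]

def nldm_delay(slew, load):
    # find the grid cell containing the query point, clamping at the edges
    i = min(max(bisect.bisect_right(slews, slew) - 1, 0), len(slews) - 2)
    j = min(max(bisect.bisect_right(loads, load) - 1, 0), len(loads) - 2)
    ts = (slew - slews[i]) / (slews[i + 1] - slews[i])
    tl = (load - loads[j]) / (loads[j + 1] - loads[j])
    # blend the four surrounding table entries (bilinear interpolation)
    lo = delays[i][j] * (1 - tl) + delays[i][j + 1] * tl
    hi = delays[i + 1][j] * (1 - tl) + delays[i + 1][j + 1] * tl
    return lo * (1 - ts) + hi * ts

print(nldm_delay(0.10, 0.008))  # one number out: the whole cell, abstracted
```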

Back in simpler times, you could model a cell simply as a driver attached to a lumped load. That doesn’t work anymore; you really need to model a driver, an interconnect network, and a receiver. One of the main problems is that interconnect impedance is getting higher and higher. If that impedance gets too high compared to the driver impedance, then the voltage divider created by the driver impedance and the network is completely dominated by the network, and the output response will look just like the input voltage regardless of the interconnect – which clearly can’t be right, and which tells you the simple lumped model has stopped capturing what matters.
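To see the divider argument in numbers, a tiny sketch (with arbitrary impedance values) shows the fraction of the input appearing at the driver pin marching toward 1 as the network impedance grows:

```python
# As z_net grows relative to z_drv, the driver-pin voltage tracks the input
# no matter what lies beyond it. Arbitrary, illustrative numbers.
z_drv = 1.0
for z_net in (1.0, 10.0, 100.0, 1000.0):
    print(z_net, z_net / (z_drv + z_net))  # fraction of input at the pin
```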

There are other problems. When trying to determine a single equivalent capacitance to use at the receiver, it’s hard to find one value that gives both the right transition time and the right delay for rising edges and falling edges (although modifications to the older methods did allow some swapping of capacitances depending on switching direction). This is compounded when the Miller effect, where the input capacitance is multiplied by the gain of the cell, is significant. Modeling input waveforms as simple ramps is also inaccurate and can yield simulation results that are too optimistic.
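For the Miller effect specifically, the shorthand above (“multiplied by the gain”) corresponds to the textbook factor of one plus the gain for a capacitance bridging an inverting stage; in rough, invented numbers:

```python
# Capacitance bridging an inverting stage's input and output looks larger
# from the input by a factor of (1 + gain). Purely illustrative values.
c_bridge, gain = 0.002, 4.0       # pF, unitless stage gain
print(c_bridge * (1 + gain))      # effective input capacitance, pF
```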

One solution to this that came and went was the scalable polynomial delay model (SPDM), which attempted to use polynomials to model cell response more accurately. According to Synopsys’ Robert Hoogenstryd, the issue here was the amount of work required to generate the models – apparently the curve-fitting work was onerous – and this never really got any traction. Another approach was going to be needed.

So let’s step back a second. We’ve found that simply trying to track a switching voltage as it ramps across a threshold isn’t accurate enough. The fine-level dynamics make the ramp non-linear and may actually perturb the threshold during the transition, and the effective capacitances may vary with time as well. And all this to model the voltage behavior. But what causes a node to change voltage? It’s the charge being delivered to the node via a current. OK, that’s kind of redundant since, by definition, the only way charge gets delivered is via a current. But you know what I mean.

The point is, the things that are hard to figure out when playing only with voltage are the things that result from changes in the way current flows. Voltage is merely the effect; current is the cause. If you stick with the current as the central feature, you can always figure out the voltage at a given time as long as you know the capacitances and then sum up all the charge that gets deposited into the capacitors. If you only know the voltage at a given time, you can’t necessarily go backwards and figure out exactly how the charge got there.
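If you want to see that logic run, here’s a minimal sketch – invented numbers, convenient units (mA·ns/pF comes out in volts) – that recovers a voltage waveform by accumulating the charge delivered by a current waveform into a node capacitance:

```python
# Current-first view: v(t) = v0 + q(t)/C, with q(t) accumulated from the
# current waveform. All numbers are made up for illustration.
t = [0.00, 0.02, 0.05, 0.10, 0.20]   # time points (ns)
i = [0.00, 0.80, 1.20, 0.60, 0.05]   # current into the node (mA)

def voltage_from_current(t, i, cap, v0=0.0):
    v, q = [v0], 0.0
    for k in range(1, len(t)):
        # trapezoidal integration: charge added over this segment (pC)
        q += 0.5 * (i[k] + i[k - 1]) * (t[k] - t[k - 1])
        v.append(v0 + q / cap)
    return v

print(voltage_from_current(t, i, cap=0.1))   # cap in pF; ends near 1.2 V
```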

The result is current-based modeling. Cells are modeled with current sources instead of voltage sources. And, unlike the NLDM approach, where tables are built with look-up values in each entry, here tables are built with a waveform in each entry. For each slew/load combination, you end up with the time-varying characteristics of the current. Practically speaking, when characterizing this curve, it is measured or simulated very accurately, and then a piece-wise linear approximation is created (adaptively sampling to use more points where things are changing quickly, fewer where they aren’t) and stored in the library.
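As a rough illustration of the adaptive part, here’s a generic curve-reduction pass in that spirit – emphatically not Synopsys’s actual characterization code – that keeps points where the curve bends and drops them where it doesn’t:

```python
import math

# Ramer-Douglas-Peucker-style reduction: keep only the points needed so
# that linear interpolation stays within a tolerance of the dense curve.
def pwl_reduce(t, y, tol):
    def recurse(lo, hi, keep):
        kmax, emax = None, 0.0
        for k in range(lo + 1, hi):
            # deviation of each point from the chord between lo and hi
            frac = (t[k] - t[lo]) / (t[hi] - t[lo])
            err = abs(y[k] - (y[lo] + frac * (y[hi] - y[lo])))
            if err > emax:
                kmax, emax = k, err
        if kmax is not None and emax > tol:
            recurse(lo, kmax, keep)
            keep.add(kmax)
            recurse(kmax, hi, keep)

    keep = {0, len(t) - 1}
    recurse(0, len(t) - 1, keep)
    idx = sorted(keep)
    return [t[k] for k in idx], [y[k] for k in idx]

dense_t = [k * 0.002 for k in range(101)]                         # dense samples (ns)
dense_i = [math.exp(-((x - 0.05) / 0.03) ** 2) for x in dense_t]  # peaked current pulse
t_pwl, i_pwl = pwl_reduce(dense_t, dense_i, tol=0.01)
print(len(dense_t), "->", len(t_pwl), "points")  # kept points cluster near the peak
```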

There are actually two manifestations of this approach. An earlier one, called ECSM and championed by Cadence, actually retains its voltage-based approach “externally” – that is, a voltage is applied to the cell and a voltage waveform is captured even though current dynamics control the calculations. Cadence claims that this simplifies characterization since voltage is easier to measure and control than current. Meanwhile, Synopsys favors their own CCS technology, in which everything is represented as a current, with voltages created through integration as needed during simulation when the cell is used. They claim that there are times when current is required, and if all you have is voltage, then you have to go backwards to approximate what the current is – less accurate than having the original current measurements and calculating an accurate voltage where needed.

You might reasonably ask where current is needed. After all, if an accurate voltage waveform is presented that represents the internal current dynamics correctly, then wouldn’t that be an easier way to model delay? Perhaps, although more modern simulators actually use current to calculate delays. Even so, these models are also used for noise and power calculations, and power, in particular, is almost exclusively a current-oriented phenomenon. On the other hand, is the ECSM approach “good enough”? We’ll let the market hash that one out; that’s not the goal here.

In fact, all of this is really background to what’s new, since even CCS has been around for a few years. Whether you use a voltage- or current-based waveform as input and output, the basic change from NLDM models is that a single value in a table has been replaced with a waveform. Even though the waveform has been simplified as a piecewise linear curve, you have still replaced a simple value lookup with several values describing a curve. In other words, the amount of data needed to represent the library has shot up.

In addition to data storage, calculation time can be substantial when looking up waveforms. It may sound silly that a lookup should take much time, and, in fact, if the value you’re using to index into the table or into a waveform happens to match exactly a value that is explicitly in the table or waveform, then you get your result quickly and you’re on your way. That’s not usually the case, however; usually you have a value that’s between two stored points, so now you have to interpolate. Algorithms exist for that, but they take time.
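Here’s a simplified sketch of what that extra work looks like when a query falls between two stored waveforms – a hypothetical scheme, since real delay calculators may align or renormalize the curves before blending, but it shows why you’re now interpolating whole curves instead of four scalars:

```python
def pwl_eval(t_pts, y_pts, t):
    # evaluate a stored piece-wise-linear waveform at an arbitrary time
    if t <= t_pts[0]:
        return y_pts[0]
    for k in range(1, len(t_pts)):
        if t <= t_pts[k]:
            frac = (t - t_pts[k - 1]) / (t_pts[k] - t_pts[k - 1])
            return y_pts[k - 1] + frac * (y_pts[k] - y_pts[k - 1])
    return y_pts[-1]

def blend_waveforms(w_lo, w_hi, weight, times):
    # mix two stored waveforms point by point with the interpolation weight
    return [pwl_eval(*w_lo, t) * (1 - weight) + pwl_eval(*w_hi, t) * weight
            for t in times]

w_lo = ([0.0, 0.02, 0.10], [0.0, 1.0, 0.1])   # stored at the lower slew point
w_hi = ([0.0, 0.04, 0.15], [0.0, 0.8, 0.1])   # stored at the upper slew point
times = [k * 0.01 for k in range(16)]
print(blend_waveforms(w_lo, w_hi, weight=1 / 3, times=times))
```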

Synopsys recently announced that they’ve applied what they term “base curve” compaction to their CCS libraries, and that this has been adopted into the standard Liberty format used for cell libraries. They noticed that there are a number of fundamental wave “shapes” (at the risk of oversimplifying) to which all other waveforms can be related by a few simple offset parameters, so the waveforms no longer have to be stored explicitly.

They actually found that they could get more similarity between curves if they went from current-vs-time curves to normalized I-V base curves. An actual I-V curve can then be described by a reference to a base curve plus four critical values: the starting current, peak current, peak voltage, and time-to-peak – five numbers in total, counting the base-curve reference. In some cases, two base curves are used per I-V curve: one for the left half, one for the right half. Without this compaction, the ten or so data points for the curve have to be stored, and, since those are adaptively sampled (rather than being taken at fixed, known points), both coordinates of each point have to be saved, meaning twenty (or so) numbers have to be stored. Five numbers vs. twenty: you be the judge.
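As a sketch of how such an entry might be expanded back into a usable curve – the denormalization here is my assumption for illustration, not the Liberty specification – treat the base curve as a shape normalized to [0, 1] on both axes and let the five stored numbers place it in real units:

```python
base_curves = {
    # base_id: (normalized voltage points, normalized current points)
    7: ([0.0, 0.2, 0.5, 1.0], [0.0, 0.9, 1.0, 0.3]),
}

def expand_entry(base_id, i_init, i_peak, v_peak, t_peak):
    vn, cn = base_curves[base_id]
    volts = [v * v_peak for v in vn]                     # rescale voltage axis
    amps = [i_init + c * (i_peak - i_init) for c in cn]  # rescale current axis
    # t_peak pins when the peak current occurs, letting the delay calculator
    # place the reconstructed curve on the time axis
    return volts, amps, t_peak

print(expand_entry(7, i_init=0.02, i_peak=1.1, v_peak=0.9, t_peak=0.013))
```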

Note that no physical relationship is implied between a base curve and a curve that can be derived mathematically from it; the normalized base curves can be stored in a base-curve database with no indication of where they came from. So a particular curve from one cell may be associated with a base curve derived from a completely different cell. That’s not to suggest anything causative there – it’s strictly a mathematical convenience. It’s like noting that Leonard Cohen looks like Dustin Hoffman or that Slovenia looks just like a running chicken; it may be true, but it doesn’t mean they’re related.

This has reduced the amount of data that needs to be stored by about 75% while also cutting the time required to interpolate; they’ve seen PrimeTime run as much as 60% faster. So even though the simulations are now done with the same accuracy as before – and with much more accuracy than was available using the NLDM approach – results can be achieved more quickly. Now that it has been added to the Liberty format, it is available for download and general use within a variety of tools.

Links:
CCS
ECSM
