
Loss of Innocence

Modeling Analog Systems as Analog

Innocence is ignorance. And ignorance is bliss. The messy, inconvenient details and realities of the world can be an incredible buzzkill, and it’s just nicer to abstract them away as unnecessary refinements, third-order effects. Or perhaps nuisances that some specialist can be assigned to take care of.

The world is analog. We are repeatedly told that, as if by dint of exposure we will somehow acquire an appreciation for the madness that is unleashed by viewing the world through an analog lens. But it never happens. Most of us breathe a sigh of relief upon learning that the annoying analog bits can be converted to digital, preferably as early as possible, and then we can live in a magical happy digital land of 1s and 0s (and, of course, don’t-cares, but we don’t really care about those). For those inescapable bits of analog on the other side of the analog/digital divide, the digital hipsters can fortunately rely on that slightly suspicious social outcast, the analog designer, to do the really hard work.

But reality is intruding upon this simple arrangement. The design of ever more complex systems on a single chip, or of systems mixing a number of technologies and disciplines, increasingly requires that developers design and model in a way that combines both analog and digital, transcends the divide, and joins the parts as a whole. The analog portions, which have traditionally been resolved in the shadows and dark alleys through hushed conversations involving real numbers and calculus, are now being exposed to the full glare of the spotlight. Their practitioners stand shielding their eyes, blinking uncertainly, wary of impending taunts and demands that they trim their beards. Are they really being invited into polite company? Will the suspicious mystical awe they’ve been accorded be transformed into full equality? Can they relax in the knowledge that their time has come to bask in a new golden age?

If this is more than just a dream (or a nightmare, depending on your view), then the tools used for designing and modeling must provide much more than their digital counterparts have. In fact, there have been ways of designing analog and mixed-signal (AMS) systems for quite some time; they just haven’t been as visible as the digital ones. Some are being refined with new revisions such that, as of this printing, AMS versions of SystemC, VHDL, and Verilog are either available or under development. But designing analog systems requires a very different mindset than designing digital systems does, and this has greatly influenced the nature of the design languages and the engines used to simulate their behavior.

The nature of analog

As we venture into the analog world, we leave behind the straightforward digital concept of simply setting some abstract value, typically determined by a Boolean function, on a net or wire during some chunk of time, and inspecting that value during some other chunk of time. We enter a world of circuit networks, where various branches contribute real-valued currents or voltages and where time is continuous. Operating points and steady-state behavior become important. Small-signal analysis, transfer functions, and Laplace transforms enter the vocabulary.

And the notion of conservative law systems becomes inescapable. A conservative law system is one where some quantity must be conserved. In the familiar electrical world, that quantity is charge. And the nature of charge conservation gives rise to the familiar (and sometimes forgotten) Kirchhoff’s laws for voltage and current. These state, respectively, that at any instant in time the sum of voltages around a loop must be zero and the sum of currents into a node must be zero. Violation of those laws implies that charge is building up (or leaking away) somewhere. If that’s actually happening in a practical circuit (say, through the charging of a parasitic capacitor), then it must be accounted for in the circuit (for example, by adding the capacitor), at which point the laws are no longer violated. But with a theoretical circuit, the laws simply cannot be violated, period. We cannot magically make charge appear or disappear.
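Stated compactly (standard textbook form, independent of any particular AMS language):

```latex
\sum_{k \in \text{loop}} v_k(t) = 0 \quad \text{(KVL)}
\qquad\qquad
\sum_{k \in \text{node}} i_k(t) = 0 \quad \text{(KCL)}
```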

The simulation of circuits using conservative law models amounts to solving a system of simultaneous equations: simple algebraic ones for steady state and differential equations for time-varying signals. These systems are often referred to as differential-algebraic equations (DAEs), or ordinary DAEs (ODAEs) when only ordinary (not partial) derivatives are involved. The analog simulator must have an engine that can solve such systems of equations.
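As a hypothetical illustration (not from the article), take a resistor R from a source to an output node and a capacitor C from that node to ground. KCL at the output node gives one differential equation, and the source adds an algebraic constraint:

```latex
\frac{v_{\mathrm{in}} - v_{\mathrm{out}}}{R} \;=\; C\,\frac{dv_{\mathrm{out}}}{dt},
\qquad v_{\mathrm{in}}(t) = V_s(t)
```

At DC the derivative term vanishes and only the algebra remains; with a time-varying source, the simulator has to integrate the differential part numerically.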

What’s particularly nice about the world of conservative law systems is that you can extend it to other non-electrical disciplines. Mass/energy conservation in general allows the simulation techniques to be applied to mechanical, optical, fluid, thermal, and even chemical systems. For example, it’s not much of a stretch to visualize Kirchhoff’s laws as applied to fluids in a pipe; at any junction of pipes, the sum of flows must be zero. (This might appear to apply to gases as well, except that gases compress, making it look like more is going in than is coming out.)

In order to generalize Kirchhoff’s laws beyond the electrical domain, the concepts of voltage and current are generalized to potential and flow. So stated, Kirchhoff’s laws still hold: the sum of potentials around a loop must equal zero and the sum of flows at a node must equal zero.

How are you going to calculate that?

The actual calculation of these circuits, once the exclusive domain of SPICE-like programs, can be time-consuming. Depending on what level of simulation is performed, you may have far more precision than you need, in which case the time is not well spent. In addition, a mixed-system simulation has to account for both digital and analog portions (and no one is proposing modeling the digital portions in a more “accurate” analog manner).

So at the most basic level, two engines, or “kernels,” are required: one for digital and one for analog. The digital kernel schedules and generates events in discrete time. The analog kernel handles the continuous-time systems of equations. The overall system, helped by the actual models, keeps these two in synch. But there is also a middle ground that helps simplify some calculations: the so-called “signal flow” model, which can be continuous- or discrete-time. Unlike conservative law models, signal flow models don’t worry about the complete solution of both voltage and current; they just account for one or the other and have a direction associated with them.

This makes it easier to conceive of, for example, a signal flowing through a filter. You can model the relevant aspect of the signal without having to completely solve the entire system, and this saves computation. It also relieves you of a level of detail that might not be important in the early stages of a design but that can be refined later into a more accurate conservative law model when you’re simulating an actual implementation of the circuit in detail.
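For instance, a signal-flow low-pass filter in Verilog-AMS might look roughly like the following sketch (the module name, corner frequency, and use of the built-in laplace_nd filter function are illustrative, not taken from the article):

```verilog
// Sketch of a signal-flow low-pass filter: only the voltage is modeled
// (no branch currents), and the signal has a direction from in to out.
`include "disciplines.vams"

module lowpass_sf(in, out);
  input  in;
  output out;
  voltage in, out;             // signal-flow discipline: potential only
  parameter real fc = 1.0e3;   // corner frequency in Hz (illustrative)

  analog
    // H(s) = 1 / (1 + s/(2*pi*fc)), using the built-in Laplace filter function
    V(out) <+ laplace_nd(V(in), {1.0}, {1.0, 1.0/(6.283185307*fc)});
endmodule
```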

At the earliest phases of system design, where you’re modeling behavior, it’s important to simulate quickly, with less accuracy, so that you can try different approaches and do lots of “what if” analysis. Too much accuracy will bog down this simulation, so appropriate simplifications are needed. This is the domain of SystemC, for which AMS extensions are being proposed. In fact, not only might such a SystemC-AMS incorporate the two basic kernels for digital and analog, but it could also allow for other dedicated kernels for circuits where simplification is known to be safe. Two examples are linear networks and synchronous dataflow. “Linear” is a subset of “non-linear,” more or less, so if you can rule out non-linear behavior, your calculations can be done much more quickly. Likewise, “synchronous” is a special case of “asynchronous,” and abandoning any concerns about asynchronous events can simplify and speed up the simulation of synchronous ones.*

The SystemC-AMS proposal allows components from four different domains to be interconnected through converters: electrical linear networks (ELN) and linear signal flow (LSF), which use DAE solvers that assume linear behavior and continuous time; timed data flow (TDF), which uses a discrete-time model; and digital. Most of the models are pre-defined so that they can be assembled rapidly without worrying about the details of implementation.
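To give a flavor of the TDF style, here is a minimal sketch of a discrete-time processing module; the names follow what eventually became the standardized SystemC AMS extensions, and the module, gain, and timestep are invented for illustration:

```cpp
// Sketch of a SystemC-AMS timed data flow (TDF) module: a simple gain stage.
#include <systemc-ams>

SCA_TDF_MODULE(gain_stage) {
  sca_tdf::sca_in<double>  in;    // discrete-time TDF input
  sca_tdf::sca_out<double> out;   // discrete-time TDF output

  void set_attributes() {
    set_timestep(1.0, sc_core::SC_US);  // one sample every microsecond
  }

  void processing() {                   // called once per TDF timestep
    out.write(2.5 * in.read());         // static gain of 2.5 (illustrative)
  }

  SCA_CTOR(gain_stage) : in("in"), out("out") {}
};
```

Modules like this are wired into a dataflow graph, and the TDF scheduler runs them at fixed timesteps without ever invoking a DAE solver.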

VHDL-AMS also allows varying levels of abstraction, with corresponding ease of computation, all within a single model. VHDL separates the definition of the model interface (the entity) from its implementation (the architecture), and it’s possible to define multiple architectures, each representing the behavior of the model differently. One representation could be simple, comprehending only first-order effects, and quickly computed. At the other end of the spectrum, a detailed, precise representation could account for secondary and tertiary effects but take longer to solve. These two could be housed in the same model, with a parameter specifying which configuration to use for a given simulation.
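A rough sketch of what that might look like (the entity, architecture names, gain, pole, and output resistance below are all invented for illustration):

```vhdl
library IEEE;
use IEEE.electrical_systems.all;  -- standard multi-discipline package (library name can vary by tool)

entity amplifier is
  generic (gain : real := 10.0);
  port (terminal inp, outp : electrical);
end entity amplifier;

-- Quick behavioral view: ideal gain, no dynamics
architecture ideal of amplifier is
  quantity vin  across inp to electrical_ref;
  quantity vout across iout through outp to electrical_ref;
begin
  vout == gain * vin;
end architecture ideal;

-- Slower, more detailed view: adds a dominant pole and finite output resistance
architecture detailed of amplifier is
  constant rout : real := 50.0;    -- output resistance, ohms (illustrative)
  constant tau  : real := 1.0e-6;  -- dominant time constant, seconds
  quantity vin  across inp to electrical_ref;
  quantity vout across iout through outp to electrical_ref;
begin
  vout + tau * vout'dot == gain * vin - rout * iout;
end architecture detailed;
```

A configuration declaration (or a generic tested inside the model) then selects which architecture a given simulation binds to.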

The other real-world aspect of computing that interferes with the ultra-real-world aspects of real numbers and continuous time is the fact that iterative numerical methods are used to solve the DAEs at a given point in time. That means there will be some small error, and the engine will iterate, hopefully converging, until that error gets small enough. And, of course, strictly speaking, a real point in time is infinitesimal; true continuous time can’t be simulated. So for the purposes of computing, we have to specify both tolerances that define when convergence is “good enough” and time increments that define how often the computation will occur.

Two conservative views

When it comes to representing conservative law behavior, Verilog and VHDL take different approaches. It seems mostly to be a tomayto/tomahto thing, although I assume there are adherents willing to put up a passionate defense of one approach or the other. Both define circuit branches that meet at common points; Verilog refers to those connection points as “nodes,” while VHDL calls them “terminals.”

Verilog uses a so-called “source/probe” approach to expressing analog behavior. This seems a bit confusing at first, but it isn’t too bad once you sort it out. The different kinds of analog systems (electrical, thermal, mechanical, etc.) are defined in “disciplines,” each of which binds the “natures” of the system. Disciplines can be created as conservative law, where both a potential and a flow nature are defined, or as signal flow, where either potential or flow is defined, but not both. Verilog allows you to define your own natures and disciplines, so if a system isn’t already provided in the standard library, you can create your own. The items defined in a nature include the units of its potential or flow, a default tolerance, and, importantly, the so-called “access function” through which you set and get values of that potential or flow.
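For example, the standard electrical discipline is built up roughly like this (simplified from the disciplines.vams header that Verilog-AMS models normally include; the tolerance values shown are the usual defaults):

```verilog
nature Voltage;
  units  = "V";
  access = V;        // access function for the potential
  abstol = 1e-6;     // default convergence tolerance
endnature

nature Current;
  units  = "A";
  access = I;        // access function for the flow
  abstol = 1e-12;
endnature

discipline electrical;
  potential Voltage;  // conservative: both potential...
  flow      Current;  // ...and flow natures are bound
enddiscipline
```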

With an electrical system, you use the electrical discipline, which defines Voltage as the potential and Current as the flow, and provides the access functions V() and I() for getting or setting the values at a particular node or branch. When getting a value (typically on the right-hand side of an expression), the access function acts as a probe. When setting a value (typically on the left-hand side of a contribution statement), it acts as a source. And setting the value is done additively, through a series of contributions. For example, if the current through a branch consists of a couple of DC components, an AC component, and some noise, multiple statements are written using the contribution operator “<+”, each of which adds a component to the current. The sum of all the contributions is the final current. The statements in the analog block execute sequentially, so the order in which they are written can matter.
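In code, that might look something like the following sketch (module and parameter names invented):

```verilog
// Sketch of additive contributions: the branch current is the sum of two DC
// terms, an AC (sinusoidal) term, and a noise term.
`include "disciplines.vams"

module noisy_source(p, n);
  inout p, n;
  electrical p, n;
  parameter real ibias = 1e-3, ioffs = 2e-5, iac = 1e-4, freq = 1e3;

  analog begin
    I(p,n) <+ ibias;                                     // first DC component
    I(p,n) <+ ioffs;                                     // second DC component
    I(p,n) <+ iac * sin(6.283185307 * freq * $abstime);  // AC component
    I(p,n) <+ white_noise(1e-24, "shot");                // noise component
  end
endmodule
```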

Verilog also allows special “probe” branches, which are never contributed to; their access functions appear only where values are read, on the right-hand side of expressions. If the current access function of a probe is used in an expression, then the voltage across it is assumed to be zero (like an ideal ammeter); if the voltage access function is used, then the current through it is assumed to be zero (like an ideal voltmeter). You can’t use both the voltage and current of a probe in expressions.

Verilog also provides two special operators for handling time-varying calculus: the ddt() (time derivative) and idt() (time integral) functions. The same relationship can be written in different forms depending on whether you lean on derivatives or integrals, and of what order. Not surprisingly, forms using the derivative are often preferred (come on, can you honestly say you preferred doing integrals in college, except when you were trying to prove how clever you were?). In addition, higher-order derivatives are apparently less accurate, so first derivatives are preferred; if a second derivative is needed, you can use ddt(ddt()).
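As a small sketch (module and parameter names invented), here is the same kind of branch relation written once with ddt() and once with idt():

```verilog
`include "disciplines.vams"

// Capacitor: the derivative form is the usual choice, i = C * dv/dt
module cap(p, n);
  inout p, n;
  electrical p, n;
  parameter real c = 1e-9;
  analog I(p,n) <+ c * ddt(V(p,n));
endmodule

// Inductor: written with idt() instead, i = (1/L) * integral of v
module ind(p, n);
  inout p, n;
  electrical p, n;
  parameter real l = 1e-6;
  analog I(p,n) <+ idt(V(p,n)) / l;
endmodule
```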

VHDL uses a more general approach; there are no pre-defined disciplines or anything like that. Instead, a branch is simply defined as connecting two terminals, with a potential across the branch and a flow through the branch; “across” and “through” are keywords for these definitions. Then a set of equations is written and, unlike Verilog, where execution is sequential, in VHDL the statements are simultaneous and are solved together. Instead of using operators for calculus, two attributes are defined: 'dot for time derivatives and 'integ for time integrals.
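The capacitor from the earlier Verilog sketch might look roughly like this in VHDL-AMS (again, names are illustrative):

```vhdl
library IEEE;
use IEEE.electrical_systems.all;  -- standard multi-discipline package (library name can vary by tool)

entity capacitor is
  generic (c : real := 1.0e-9);
  port (terminal p, n : electrical);
end entity capacitor;

architecture behavioral of capacitor is
  -- potential across and flow through the branch between p and n
  quantity v across i through p to n;
begin
  i == c * v'dot;   -- simultaneous statement: i = C * dv/dt
end architecture behavioral;
```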

Although these two ways of describing analog behavior seem pretty different, they can both be used to describe the same analog phenomena, and it’s just a matter of adjusting your thinking to visualize things one way or the other.

Revisions underway

There’s a lot of AMS language activity underway, as evidenced by the proliferation of “AMS” suffixes at DAC this year. VHDL-AMS was revised in 2007 (IEEE 1076.1-2007). In June of this year, the Open SystemC Initiative (OSCI) released a whitepaper proposing AMS extensions to SystemC. The discussions are ongoing, with a goal of releasing a public draft later this year. Meanwhile, the Accellera board approved Verilog-AMS 2.3 in August. Verilog is a bit more complicated, since the official digital Verilog is an IEEE standard (1364) while the AMS version is not; in fact, the digital portions of the older Verilog-AMS 2.2 version were not in synch with the IEEE version. That has been harmonized in the latest revision. The next steps are to get the AMS extensions integrated into SystemVerilog and to define AMS extensions for assertions and behavioral modeling. Once those are done, the result will be submitted to the IEEE for adoption.

*I’m sure some of you lexicologists would love to crucify me for defining words as being special cases of their antonyms. Strictly speaking that could never be true. But since there are no appropriate container words that include both the words and their antonyms, I’m just gonna have to roll with this and call it good.

Useful links:
SystemC AMS extensions whitepaper
Verilog-AMS 2.3 Language Reference Manual
(VHDL-AMS spec must be purchased from IEEE)
VHDL-AMS and Verilog-AMS as alternative hardware description languages for efficient modeling of multidiscipline systems (A case study including electrical, mechanical, thermal, optical, and chemical disciplines, as well as a table comparing VHDL-AMS and Verilog-AMS)
