Dante would feel right at home surveying the math required to create useful circuits. He might meet some argument as to whether he was observing hell or heaven, but there would be no disagreement on the levels one would encounter as one approached the deepest depths or highest heights.
At the first level, one finds the easy world of ones and zeros. George Boole governs this domain, and he holds dominion over a disproportionate swath of the landscape. Moving in a level brings us to the simple world of passive circuits ruled by conservation laws: voltage and current sources and resistors. Here can be found simple linear equations, albeit sometimes many of them at once. Kirchhoff owns this sphere.
Moving further, add some active elements and you get decidedly non-linear equations. A variety of personalities administer this domain, as exemplified by the co-governors Ebers and Moll. Next we move to another deceptively benign-looking passive world, but here capacitors and inductors lurk, throwing off simplistic views of simple circuits. Now we must be able to morph from the time to the frequency domain, accounting for lags and leads and phase shifts. Calculus enters our calculations, and, just so as not to make things too easy, it assaults us in the form of messy integrals. Here we pay homage to Laplace as regent.
And yet, complex as this sphere appears, it pales by comparison to our last and final stop: the world of fields. Not mere calculus, vector calculus. Bizarre notions of divs and curls. Partial differential equations. Ruled with an iron fist by the almighty Maxwell.
It’s no accident that the population of engineers inhabiting each sphere decreases as you approach the intimidating world of fields. Most do digital. Few do fields. And where fields are the focus of attention, it’s the PhDs and professors that are the go-to guys for the kinds of messy math required to make this work.
Problem is, this stuff is hard to solve analytically. Mostly impossible. So you have to do much of it numerically. And, no less important, you have to finish the solution sometime this century. Preferably during the time it takes me to go get some coffee. So numerous approaches have been devised to get a good-enough solution soon enough. Exactly what that means depends on the problem being solved and changes over time. Software that has the temerity to take on such problems is referred to as a field solver. And you’ve got lots of choice of technique and level of precision, as summarized in a brief paper by PhysWare*.
The field of ICs
In the IC world, there appear to be two places making use of field solvers: noise analysis and parasitic extraction. Which, to some extent, kinda seem like the same thing since they both provide a distributed network of parasitic elements that can then be used for simulation. Field solvers are required here because the physical arrangement of devices and wires, on various layers, separated by various dielectrics, determines how these elements affect each other.
The metal lines themselves cannot sustain an electric field; it is the dielectric within which the field resides. Understanding that field, based on the physical properties of the dielectric and the geometries of the structures, allows a model to be built consisting of capacitors, inductors, resistors, and conductances. Armed with such a model, simulations can be performed that account for the specific physical interactions caused by the actual layout of the circuit.
The problem is, fields exist in three dimensions. (OK, perhaps there is one deeper layer where fields exist in, like, six dimensions on a Calabi-Yau manifold or some other such bizarreness, but we really haven’t been so evil in this life as to deserve that level of what must assuredly be hell.) Solving fully yields a solution for all variables at the same time. It’s a giant simultaneous equations problem – meaning that massive matrices are being manipulated. Which takes time. Too much time, say many.
If things are simple enough, like all dielectric layers having the same dielectric constant, then the matrices end up being relatively sparse – most entries are zero – and techniques that you and I learned and quickly forgot for solving two-ports and such can be extrapolated to industrial strength to bring about a timely solution. But, absent such a simplistic world, dense matrices result, slowing down calculation. This means that either some sort of simplification must be made, or the extent of the circuit being solved must be reduced to a tractable level – something smaller than a full chip.
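To make the sparse-matrix point concrete, here’s a toy sketch of my own (not anyone’s actual solver – the ladder circuit, sizes, and conductance values are all made up for illustration). A Kirchhoff-style nodal analysis matrix for a ladder of resistors is tridiagonal – almost entirely zeros – so a sparse solver chews through it while a dense solve of the very same system hauls around all those zeros:

```python
# Toy illustration: nodal analysis of a resistor ladder. The conductance
# matrix is tridiagonal (mostly zeros), so sparse techniques apply.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 1000   # number of internal nodes (arbitrary for the example)
g = 1.0    # conductance of each resistor (1/R)

# Tridiagonal conductance matrix for a ladder with grounded ends.
G = diags(
    [2 * g * np.ones(n), -g * np.ones(n - 1), -g * np.ones(n - 1)],
    [0, -1, 1],
    format="csr",
)

# Inject 1 A at the first node.
I = np.zeros(n)
I[0] = 1.0

v_sparse = spsolve(G, I)                    # sparse solve: fast, low memory
v_dense = np.linalg.solve(G.toarray(), I)   # dense solve of the same system

print(np.allclose(v_sparse, v_dense))       # same voltages either way
```

Same answer both ways; the difference is how much time and memory you burn getting it – which is the whole argument above, writ small.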
And here is where you end up with a variety of compromise techniques. Amongst full solvers, there appear to be three “axes” that vary: the number of dimensions, the discretization approach, and the extent to which full wave behavior is accounted for. Two dimensions are obviously much easier to solve for than three. Depending on the problem being solved, the two dimensions may be a vertical cross-section or a horizontal element. Third-dimension components can sometimes be accounted for without going completely to a 3D solver; simpler approaches of modeling the third dimension are devised, giving rise to so-called 2.5D solvers.
As to how the shapes are modeled, be prepared to encounter arcane-sounding names like finite difference, finite element, and boundary element (or method-of-moments, which, I presume, if not solved fast enough, becomes method-of-hours). And I am nowhere near prepared to expose the level of ignorance that would immediately be apparent by the slightest attempt at plumbing the depths of these topics. This is why universities and grad students and professors were invented. I’m sure the proofs are trivial and left to the reader, so I’ll leave them to you.
As to whether or not you can cheat on solving full-wave behavior, apparently there are “quasi-static” solvers that can be useful at low frequencies (where, presumably, you’re closer to static), but most of the newer solvers appear to claim full-wave capability.
Then there is the other approach to the problem: a stochastic method, the so-called “random walk” technique. Completely different. To illustrate two state-of-the-art solvers, we’ll focus on a 2.5D solver from e-System for analyzing noise sensitivity and a random-walk solver from Silicon Frontline used for parasitic extraction.
Entering the field using two different approaches
e-System’s Madhavan Swaminathan, also a professor – surprise! – at Georgia Tech, walked through the process that they use in their Sphinx solver/simulator. Because it’s not a full 3D solver, they go through a number of steps to come up with a final set of matrices that get solved. They start with a 2D field view, where 2D is horizontal. They assume that between any two conductors there is only a single dielectric type; this accounts for all but a few very rare corner cases. From this a set of matrices is built that will yield a network of capacitors and inductors.
This model must then be adjusted to account for loss through the addition of resistors and conductances. As part of all of this, a transformation is made from the field world (Maxwell) to the circuit world (Kirchhoff); the next steps take place with the circuit formulation.
Matrices are then created to account for the coupling between layers – this is kind of the 0.5D approach to the vertical. Coupling can be a result of vias or capacitance.
Finally, the power grid is analyzed. This is done so that such things as return path discontinuities can be accounted for. e-System has a proprietary method of combining the power grid matrices with the signal line matrices to get one final set of honkin’ (but sparse) matrices that are then solved all together. And… ta-da! You have a model that can then be used for what they term signal/power co-simulation. They find that they can solve a complex circuit in about an hour, which, in this world, is rather fast.
e-System’s approach stands in contrast to the approach Silicon Frontline (SFT) uses for parasitic extraction. They use the random walk method, as described by SFT CEO Yuri Feinberg. One of the motivations of random walk is the fact that you can solve smaller problems more quickly than with a full field solver, which requires you to solve the entire circuit even if you want only a small subset of that information.
In addition, the “discretization” process for full solvers involves creating a mesh of geometries that approximates the area (2D) or volume (3D) being analyzed. A tighter mesh gives a more accurate result, but at the cost of more variables – the entire formulation of which must reside in memory at once. The random walk process doesn’t create a mesh per se, and the accuracy is strictly driven by the number of walks done. Since the walks are all independent of each other, only one must reside in memory at any given time, reducing the memory footprint for analyzing large circuits.
The basic idea of a walk is to create a random path from one random conductor to another (remembering that the field exists only in the dielectric and ends at any conductors). Solve that path. Do that over and over again so many times that you eventually start to fill the entire volume with these little paths and, “combining” the results of the individual calculations, end up with a solution for the entire volume. The more of these little paths you do, the tighter the fill – and the more accurate the solution. Note that this problem is very well suited to parallelization, allowing additional computing horsepower to be brought to the solution.
More specifically, the process works as follows. Randomly pick a conductor and some point on the surface of that conductor. From that point, create a cube – the “radius” of the cube will be either some maximum cube radius (which can be adjusted) or the distance to the nearest conductor, whichever is smaller. In other words, you don’t want the cube to extend past the nearest conductor. Then randomly pick a point on the surface of the cube. This could take you in any direction, which is why you get a full 3D view.
In case you think the randomness is like simply rolling dice, let me give you a more specific articulation of how this is done as provided by SFT so you’ll understand why I simplify things just a bit: “The final point of the jump is selected randomly (using a Monte Carlo procedure) according to a probability distribution function set by the Green’s function of the Laplace equation for that cube.” ‘Nuf said.
If the new point is at the next conductor, the path is complete. If not, then create another cube from that point and randomly move to a spot on the surface of the new cube. Keep repeating with the cubes until you reach a conductor. At this point, the potential at the conductor at the end of the path can be statistically related to the electric field at the start of the path.
If the max cube radius is largish, the path can evolve quickly through thick parts of the dielectric, slowing down with smaller cubes as you approach a conductor. The effect is analogous to adjusting your timestep according to how fast a signal is switching when analyzing waveforms.
Because each move to a point on a new cube is random, these paths will be anything but straight, potentially moving in all kinds of directions as they meander from one conductor to another. But, after doing millions of them, you pretty much reach everywhere in the circuit. Or close enough to everywhere.
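The walk procedure above can be sketched in miniature. This is my own toy illustration, not SFT’s algorithm: real extractors walk on cubes and pick the jump point via the Green’s-function distribution mentioned earlier, whereas I cheat by walking on spheres between two parallel plates, where uniform sampling happens to be exact (the mean-value property of harmonic functions). The geometry, tolerances, and walk count are all invented for the example:

```python
# Toy "walk on spheres" sketch: estimate the potential at a point between
# two parallel plates at z=0 (held at 0 V) and z=1 (held at 1 V), with a
# uniform dielectric in between. The exact answer is linear in z.
import random

def potential_at(z0, eps=1e-3, walks=20000, seed=42):
    """Average the plate potentials reached by many random walks from z0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        z = z0
        while True:
            r = min(z, 1.0 - z)   # largest sphere that doesn't cross a plate
            if r < eps:           # close enough to a conductor: terminate
                total += round(z) # potential of the nearby plate (0 or 1)
                break
            # For a uniform point on a sphere's surface, the z-offset is
            # uniform in [-r, r] (Archimedes' hat-box theorem).
            z += rng.uniform(-r, r)
    return total / walks

print(potential_at(0.25))  # exact answer is 0.25; estimate lands close
```

Notice the article’s points showing up in the code: the sphere (cube, in real tools) shrinks as the walk nears a conductor, no mesh ever gets built, and accuracy comes purely from cranking up `walks` – each of which is independent, hence the easy parallelization.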
As to which of these techniques is better, well, I guess that depends on whom you talk to. Prof. Swaminathan agrees that random walk can be effective for very specific problems, especially if static, but says that, for more complex time-varying problems, the standard matrix approach must be used. SFT’s Chief Scientist Maxim Ershov concedes that past random walk technologies allowed only the calculation of total capacitance. But he says SFT has made significant advances in random walk technology such that they can extract a distributed RC (or RLC) model that remains accurate at high frequencies for large circuits, including a robust accounting of DFM and manufacturing effects like floating metal fill, CMP, litho effects, and non-standard metal shapes.
Of course, as often as not, ease of use and, to some extent, sales and marketing, can trump the details when it comes to technology like this whose underlying workings are more or less inaccessible to the user. So this is yet another one the market will have to vote on. That vote will presumably be based on which tool keeps the user as many levels from hell as possible.
*Note that contact information is required to download the paper.