
Using Power and Integrity in the Same Sentence

Apache Provides Hierarchical Dynamic Power Integrity Analysis

Power is seductive. It has attracted the attention of universities, designers, tool vendors, journalists, and anyone who wants to be anyone in the silicon and systems worlds. Of course, unlike the situation in so many hallowed halls around the world, the aim here is to reduce power, not increase it (or gather it for oneself). Out of the limelight, however, is the stability of the power infrastructure: how robust is a chip’s power grid in the face of potentially wild gyrations of power draw as a chip is put through its paces? This is the realm of power integrity (two words that, in too many other contexts, have no place near each other), and, like signal integrity with respect to signal speed, it is all too often relegated to second-order status as compared to power.

There are a number of reasons why power integrity considerations are more important today than they once were. Back when chips were simpler conglomerations of data path and logic, there were obvious sources of noise that could be addressed in an ad hoc fashion. Heck, before that, transitions were slow enough that noise wasn’t even much of an issue, period. Now there are numerous potential sources, and blindly trying to fix everything that might be a problem may fix nothing while all but guaranteeing over-design. With multi-mode circuits, power consumption and activity can vary widely; sharp changes in power can create noise, and the noise profile may differ depending on which mode the circuit is in. And, increasingly, chips are designed by larger teams of engineers, including some who aren’t even officially on the team: those who designed the IP that may be purchased for use in the chip. Ownership of a particular problem is not at all clear.

Power integrity tools address these issues, and they’re not a new concept. But until recently, if you wanted a hierarchical approach to power integrity, you had to use static analysis; if you wanted dynamic analysis, you had to flatten the design. There have apparently been some attempts at a dynamic hierarchical approach, but they haven’t been able to address all sources of noise and have hence been less accurate.

Apache has recently announced a dynamic hierarchical capability, RedHawk-NX, which they say is the first to lose no accuracy as compared to a flattened analysis. They achieve this in a way that doesn’t require a huge number of vectors and that can even take IP into account without giving away the contents of that IP. We can take a look at these characteristics by examining their database and how they handle dynamic stimulus.

At first blush it would seem that a fully accurate power model would, by necessity, contain a complete set of information about the structure of the chip. After all, each transistor in transition can contribute to power, so each transistor must be considered. Leave out any part of the circuit and you’re no longer accurate. This, of course, isn’t what IP providers, in particular, like to hear. They’re loath to give out a full schematic of what they’ve worked hard on, even if they’re providing a GDS-II rendition of their product. Even synthesizable blocks may be encrypted.

In order to address this, the Apache tool creates a database of power information for each element in the circuit without actually providing the details of how the circuit elements are built or interconnected. It is layout-aware, so the impact of physical proximity will be considered, but it doesn’t record that information in the model. Instead, the geometries and adjacencies and such are all used to create power parameters; those power contributions are stored in the database, and the layout information used to calculate them is then discarded. What results is a set of numbers that accurately reflect relevant power noise information without storing how that information was generated. This can be done, for example, by an IP provider based on their circuit, and then the model – sans the actual circuit information – can be provided to the IP customer for use in full-chip power integrity analysis.
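To make that geometry-in, numbers-out flow concrete, here’s a minimal sketch in Python of how such a characterization might be structured. This is not Apache’s implementation: the element fields, formulas, and parameter names are all invented for illustration, standing in for real transistor-level characterization of an extracted layout.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ElementPower:
    """Power parameters for one circuit element: numbers only,
    with no record of the geometry used to derive them."""
    name: str
    switching_energy_pj: float  # energy per output transition
    leakage_uw: float           # static power draw
    peak_current_ma: float      # worst-case instantaneous demand

def characterize(elem):
    """Derive power parameters from layout data, then let the layout go.
    The formulas are crude stand-ins for transistor-level simulation."""
    # Geometry and adjacency feed the calculation...
    cap_ff = elem["gate_area_um2"] * 1.5 + elem["coupling_area_um2"] * 0.8
    energy_fj = 0.5 * cap_ff * 0.9 ** 2  # ~ C*V^2/2 at a 0.9 V supply
    # ...but only the computed numbers survive into the model:
    return ElementPower(
        name=elem["name"],
        switching_energy_pj=energy_fj / 1000.0,
        leakage_uw=elem["width_um"] * 0.02,
        peak_current_ma=energy_fj / (0.9 * 50.0),  # I = E/(V*t), 50 ps edge
    )

# An IP provider runs this over their layout and ships only the results:
layout = [
    {"name": "u_alu/ff1", "gate_area_um2": 0.12,
     "coupling_area_um2": 0.05, "width_um": 0.3},
]
power_db = {e["name"]: characterize(e) for e in layout}
print(power_db["u_alu/ff1"])
```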

The second element in RedHawk-NX is the creation of stimulus. They have eschewed the usual value change dump (VCD) file, which is often used as an indication of signal transitions but apparently isn’t popular with backend engineers. As a friendlier alternative, the tool automatically searches the design, inferring state machine trajectories and signal flow, and identifies the input transitions that create the greatest amount of simultaneous switching. This can then be used as the stimulus for analysis; it’s typically only a few vectors.
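As a toy illustration of the goal – find the input transition that maximizes simultaneous switching – here’s a brute-force search over a tiny combinational block. Everything in it is hypothetical, and the real tool infers its answer from state machines and signal flow rather than enumerating vector pairs, but the objective is the same idea.

```python
from itertools import product

def toggles(netlist, vec_a, vec_b):
    """Count how many nets switch between two input vectors.
    `netlist` maps each net name to a function of the input tuple."""
    return sum(fn(vec_a) != fn(vec_b) for fn in netlist.values())

def worst_case_pair(netlist, n_inputs):
    """Exhaustively find the input transition causing the most switching.
    (A real tool would infer this structurally, not by enumeration.)"""
    vectors = list(product([0, 1], repeat=n_inputs))
    return max(((a, b) for a in vectors for b in vectors),
               key=lambda ab: toggles(netlist, *ab))

# Toy two-input block: an XOR and an AND sharing the same inputs.
netlist = {
    "xor_out": lambda v: v[0] ^ v[1],
    "and_out": lambda v: v[0] & v[1],
}
a, b = worst_case_pair(netlist, 2)
print(f"worst case {a} -> {b}: {toggles(netlist, a, b)} nets toggle")
```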

But aha, you say, you can do that only if you have full knowledge of the circuit, and we just saw that the database doesn’t contain that information. OK, then, do it block by block. But, you rejoin, the worst-case situation for a block may not be the worst full-chip case once the block is hooked into the other blocks. It’s only a “local” worst-case; the “global” worst-case may involve a situation that isn’t the worst-case for that block. So now what?

The answer is that there’s a bit more flexibility in setting up stimulus scenarios, and the setup of a full-chip analysis isn’t necessarily hands-off. A designer who owns a particular block – and this especially applies to IP creators – can create a number of worst-case vector sets. Different modes of the block can be analyzed, with a stimulus created for each one. Each of these modes can be named, and the stimulus sets are then provided along with the block or IP. When assembling the blocks and doing full-chip analysis, the integrator (or whoever gets the unenviable task of figuring out whether the individual bits play nicely together) can specify which mode or modes to use in the analysis.
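Here’s a rough sketch of how that hand-off might be modeled: the block owner ships named stimulus modes alongside the opaque power database, and the integrator composes a scenario by naming one mode per block. The class and field names below are assumptions for illustration, not Apache’s actual format.

```python
from dataclasses import dataclass, field

@dataclass
class BlockModel:
    """What a block owner or IP provider ships: an opaque power database
    plus named worst-case stimulus sets, but no circuit internals."""
    name: str
    power_db: dict                             # element -> power parameters
    modes: dict = field(default_factory=dict)  # mode name -> vector set

def assemble_stimulus(blocks, scenario):
    """The integrator picks one named mode per block for a full-chip run.
    `scenario` maps block name -> chosen mode name."""
    return {blk.name: blk.modes[scenario[blk.name]] for blk in blocks}

cpu = BlockModel("cpu", power_db={},
                 modes={"burst": [(1, 0), (0, 1)], "idle": [(0, 0), (0, 0)]})
modem = BlockModel("modem", power_db={},
                   modes={"tx": [(1, 1), (0, 0)], "sleep": [(0, 0), (0, 0)]})

# For example, analyze the CPU ramping up while the modem transmits:
print(assemble_stimulus([cpu, modem], {"cpu": "burst", "modem": "tx"}))
```

Note that choosing which modes to combine remains the engineer’s call; the code merely wires the chosen stimulus sets together.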

When doing it this way, the entire chip isn’t automatically analyzed for a global worst-case; instead, an engineer picks and chooses modes in a manner that, presumably, results in meaningful worst-case analyses. So, again, IP can be provided in a manner that includes information that’s useful for analysis but doesn’t reveal the recipe for the secret sauce. As a result, the tool should be able to provide analysis that’s completely accurate, since it’s derived from a transistor-by-transistor circuit analysis; provide circuit anonymity where desired; and allow block-by-block hierarchical analysis so that the pieces and their contributions to the whole can be more easily studied, and any problems can be more easily fixed.
