
Moving Beyond Simulation

Traditionally, simulation-based dynamic verification techniques (think directed tests, constrained-random simulation, and the like) have collectively been the workhorse of functional verification. But it's fair to say that this horse is straining under the load and in need of some relief. The source of this strain is increasing SoC integration and overall complexity, which in turn drives up verification costs. Software licenses, hardware servers, and the staff to set up and run regression environments all cost real money, which of course remains scarce given the tepid economy. And even with unlimited resources, increasing design complexity leaves more and more code effectively beyond the reach of simulation.

Luckily, relief is available in the form of various static verification techniques. One such technique is formal verification, a systematic process of ensuring, through exhaustive algorithmic techniques, that a design implementation satisfies its initial requirements and specs [1]. Instead of stimulating the design to observe its behavior, formal verification relies on heavy doses of math to analyze all possible executions. Formal verification works especially well with assertion-based verification [2]. Once a specific behavior is captured with a property language such as PSL or SVA, formal verification can be used to crawl even into the darkest nooks and crannies of the code.
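For example, a handshake requirement can be captured in a few lines of SVA. The following is a minimal sketch (the signal names req, gnt, and clk are illustrative, not from any particular design): every request must be answered by a grant within one to three cycles. A formal tool will either prove this for all possible input sequences or produce a counterexample trace.

// Hypothetical handshake property: req must be followed by gnt in 1 to 3 cycles.
property req_gets_gnt;
  @(posedge clk) req |-> ##[1:3] gnt;
endproperty
assert property (req_gets_gnt);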

The other useful technique is automatic formal check, which is especially useful for legacy designs that do not have assertions. As its name implies, automatic formal hands the task of looking for design issues over to a bot of sorts. Automation, available in a handful of tools from leading EDA vendors, makes formal verification accessible to designers who are not yet ready to write properties. Here's how it works, in a nutshell: leveraging a set of pre-defined assertion rules, automatic formal check analyzes the RTL structure of the design and characterizes its internal states. Then it identifies and checks for undesired behaviors in the design.
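Conceptually, each pre-defined rule behaves like an automatically generated property. A reachability rule for a single-bit register, for instance, could be phrased as a pair of cover properties (a hypothetical sketch; q and clk are illustrative):

// If formal analysis proves either cover unreachable, register q is
// stuck at the other value and its toggle coverage can never be hit.
cover property (@(posedge clk) q == 1'b0);
cover property (@(posedge clk) q == 1'b1);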

Any Google search reveals loads of resources on formal verification. So in this article, I am going to describe two areas where automatic formal check, a relative newcomer, can be used to supplement dynamic simulation: dead-code identification and X-semantic check. My hope is that this information helps designers improve their RTL code early in the design phase of a project and helps verification engineers identify these issues during regression.

Dead Code Identification

Those of us who work on static verification are occasionally accused of monomania by our colleagues for our single-minded pursuit of a technique many consider difficult and obscure. So, before going further, let me state unequivocally: dynamic simulation is essential for verifying design functionality. Indeed, simulation makes possible an understanding of coverage, undoubtedly the single most important measure of verification completeness.

The coverage space for today's designs is a multi-dimensional, orthogonal set of metrics [3]. A metric can be white-box, measuring coverage inside the design, or black-box, measuring end-to-end behavior regardless of how the design is represented. For instance, statement and condition coverage are implicit white-box metrics that can be derived automatically from the RTL model. A scoreboard is an example of a higher-level, explicit black-box metric that ignores the implementation details of the design; it can be used even when the design is represented at different levels of abstraction.

Today, when constrained-random simulation fails to achieve the targeted coverage goal, engineers have to fine-tune the environment or add new tests. These efforts, often attempted relatively late in the verification cycle, can consume vast amounts of time and resources while still failing to reach parts of the design.

Most designs have dead code, unreachable blocks, and redundant logic. This is especially true for IP or reused blocks, which often carry extra functionality unnecessary for the current design. If implicit white-box coverage metrics such as statement and condition coverage are part of the closure criteria, this unused functionality will have a negative impact on the coverage numbers. Hence, formal verification can be used to identify unreachable code early in the verification cycle and subsequently to eliminate these targets from the coverage model. As a result, the coverage measurement will more accurately reflect the quality of the stimuli from the constrained-random tests.
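As a simple illustration (a hypothetical sketch, not from a real design), consider a reused block with a feature input that the current integration ties off. The tied-off branch can never execute, yet its statements still count against statement coverage until they are excluded.

// Reused IP: supports an optional turbo mode.
module engine_ctrl (
  input  logic       clk,
  input  logic       turbo_mode,  // tied to 1'b0 at integration
  input  logic [7:0] thrust,
  output logic [7:0] power
);
  always_ff @(posedge clk)
    if (turbo_mode) power <= thrust << 1;  // dead code in this SoC
    else            power <= thrust;
endmodule

// At the SoC level the feature is permanently disabled:
//   engine_ctrl u_ctrl (.clk(clk), .turbo_mode(1'b0), .thrust(t), .power(p));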

For each implicit white-box coverage metric [4], there are scenarios where coverage is not possible:

Statement coverage: unreachable statements

Branch coverage: unreachable branches, duplicated branches, and unreachable default branches

Finite state machine coverage: unreachable states and unreachable transitions

Condition coverage: unused logic, un-driven values, implicit constant values

Expression coverage: unused logic, un-driven values, implicit constant values

Toggle coverage: stuck-at values and unreachable toggles

Figure 1: White-box coverage metric and associated unreachable code

 

The secret to automatic formal check's ability to find unreachable code is a focus on analyzing the design's controllability and observability [3].

There are two types of unreachable dead code. One type is based on the semantics of the language; the other is based on the functionality of the design. A lint tool is good at detecting semantic dead code. For instance, in the example below, the default dont_care assignment to the accelerate signal is unreachable based on synthesis semantics, and it will be reported during the linting process. On the other hand, if the brake_pedal and gas_pedal signals are mutually exclusive, the error assignment to the accelerate signal is functionally unreachable too. This will not be detected by lint, but it will be identified by automatic formal check. By analyzing the flattened netlist representation of the design, the tool can derive these signal relationships automatically.

begin
  case ({brake_pedal, gas_pedal})
    2'b00:   accelerate = no;
    2'b01:   accelerate = yes;
    2'b10:   accelerate = no;
    2'b11:   accelerate = error;
    default: accelerate = dont_care;
  endcase
end

Figure 2: Default dont_care assignment unreachable based on synthesis semantics

 

You might ask here, “Isn’t this a common situation any time design teams integrate multiple IP blocks together? Doesn’t everyone know that IP needs to account for all the possible scenarios where it will be used?” The answer to both is, “Yes,” but those two questions aren’t the only ones that need to be asked.

The inputs to the IP are usually constrained to a subset of scenarios by the previous modules. As a result, there is redundant and unreachable logic in the IP. How can you deal with this logic? The best way, I think, is by specifying assumptions and constraints in terms of signal relationships at the input ports of a design. By leveraging this functional information, any automatic formal check tool worth its salt can identify unused logic, unreachable code, implicit constants, and stuck-at values in the design.
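For instance, the mutual exclusivity of the pedal signals in figure 2 can be expressed as an input constraint. The following is a minimal SVA sketch (the exact constraint syntax varies by tool; this assumes an SVA-capable flow):

// Tell the formal tool that brake and gas are never asserted together.
assume property (@(posedge clk) !(brake_pedal && gas_pedal));

With this assumption in place, the 2'b11 branch of figure 2 becomes provably unreachable, and it can be excluded from the coverage model.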

X-Semantic Check

Dynamic simulation also falls short in the areas of design initialization and X-semantics, generally summarized as X-semantic check. Sampling an unknown or uninitialized state (an X) necessarily produces an unpredictable value. If a design cannot be initialized reliably in silicon, it will not function correctly. If X'es are used in the RTL code, you need to monitor their entire lifecycle, from creation to propagation to usage.

Dynamic simulation does not interpret X'es as accurately as does silicon, which has only 1s and 0s. The goal is to eliminate the pessimistic X-propagation seen in simulation and to make sure an unknown or X state is not generated or consumed unintentionally in the design [5]. Formal technology, with its ability to evaluate all 0/1 combinations of X'es in the design, is an antidote. Its thorough approach ensures that the design functionality will not be corrupted by X'es. Incorporating these rules internally, automatic formal check looks for issues related to register initialization, X-assignments, X-propagation, and X-termination.
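Where hand-written properties are acceptable, the same intent can also be spot-checked with SVA's $isunknown (a minimal sketch; the signals are taken from figure 3 below):

// Once reset is released, an X must never appear on this control signal.
assert property (@(posedge clk) disable iff (rst) !$isunknown(accelerate));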

 

always_ff @(posedge clk or posedge rst)
  if (rst) gas_pedal <= 1'b0;
  else     gas_pedal <= gas_pedal_status;

always_ff @(posedge clk)
  case ({brake_pedal, gas_pedal})
    2'b00:   accelerate <= 1'b0;
    2'b01:   accelerate <= 1'b1;
    2'b10:   accelerate <= 1'b0;
    2'b11:   accelerate <= 1'bx;
    default: accelerate <= 1'bx;
  endcase

always_ff @(posedge clk or posedge rst)
  if (rst) speedup_engine <= 1'b0;
  else if (in_gear) speedup_engine <= accelerate;

 

Figure 3: Automatic formal is a good bet for dealing with the semantically unreachable default branch of the case statement shown here

 

Let's consider the potential X-semantic issues in the figure 3 example. Connecting a global reset to all the registers is ideal. However, due to power, area, and routing constraints, this may not always be possible. In the example, the register gas_pedal is reset explicitly, while the register accelerate is not. In simulation, if the signal brake_pedal is X, the case statement will pick the default branch and the accelerate register will pessimistically become X. In reality, however, once gas_pedal is reset, the accelerate register will also be 0 one cycle later. It does not need to be reset explicitly; it is reset implicitly by its fan-in cone of logic.
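That reasoning can itself be written down as a property (a minimal sketch, assuming the figure 3 logic; the property is mine, not an output of the tool): whenever gas_pedal is 0, accelerate must be 0 one cycle later, whatever brake_pedal does.

// gas_pedal low restricts the case selector to 2'b00 or 2'b10, both of
// which drive 0. Formal, which treats brake_pedal as strictly 0 or 1,
// proves this; pessimistic simulation with brake_pedal at X would take
// the default branch instead and report an X.
assert property (@(posedge clk) !gas_pedal |=> accelerate == 1'b0);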

Automatic formal check can be used to identify the registers that will be reset implicitly and the ones that won't. Of course, the registers that are not reset (explicitly or implicitly) must be initialized before they are used. Here, simulation can again miss something important: depending on the tool, some simulators may pick up the default value of the data type. In that case, the problem may not be uncovered until it is too late.
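This pitfall is easy to reproduce in SystemVerilog (a hypothetical sketch): a 2-state type silently defaults to 0 and masks a missing initialization, while a 4-state type starts as X and exposes it.

bit   flag_2state;  // 2-state type: defaults to 0, hiding the missing init
logic flag_4state;  // 4-state type: starts as X, exposing the missing init

initial begin
  // Neither flag has been explicitly initialized.
  $display("2-state: %b  4-state: %b", flag_2state, flag_4state);  // 0 and x
end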

There are two X-assignments in figure 3. As mentioned, the default branch of the case statement is semantically unreachable; hence, it is not a problem. Automatic formal check will report the other X-assignment if the selection signals brake_pedal and gas_pedal are not mutually exclusive.

By identifying all the reachable X-assignments, users can prevent X’es from being generated unintentionally. On the other hand, an X-assignment may not be a problem if the X value is never used. Distinguishing between used and unused X-assignments is critical, especially when there are many such assignments in the design. This means following the propagation and usage of those X’es.

X-propagation analysis examines the fan-out cone of an X-assignment until it terminates in one or more storage elements. It is difficult to know whether an X value will eventually be used or not. In figure 3, if the signals brake_pedal and gas_pedal are not mutually exclusive, the accelerate register may be assigned X. However, if the guarding signal in_gear is not true, the X value will not get into the output register, speedup_engine. Hence, X-termination is handled and the downstream logic is protected from X contamination.
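That guard can also be captured as a property (a minimal sketch, assuming the figure 3 logic; the property is mine): an X on accelerate is tolerable only while in_gear is low, so it can never be clocked into speedup_engine.

// Any X on accelerate must coincide with in_gear being deasserted.
assert property (@(posedge clk) $isunknown(accelerate) |-> !in_gear);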

Summary

Considered together, the various static verification techniques form a powerful toolkit for improving the verification flow. Yet such techniques are complements, not complete solutions. At DesignCon in 2007, my colleague Harry Foster and I argued that there is no silver bullet in verification. The same is true today, even after the arrival of various automatic formal check tools, including the one offered by Mentor Graphics.

Still, static verification deserves a closer look, especially since those who practice it seem to comprise one of the tiniest tribes in the world of chip design. The bottom line: static verification has been used strategically by designers to improve design quality and to complement dynamic verification on coverage closure. Besides helping identify dead code and X-semantic issues, as explained in this article, static verification also accelerates the discovery and diagnosis of design flaws during functional verification, reduces the time required to verify a design, and simplifies the overall development cycle for complex SoC designs. I'd bet that the haggard-looking workhorse of dynamic simulation will be happy to have the help.

References:

[1] Douglas Perry and Harry Foster, "Applied Formal Verification," McGraw-Hill.

[2] Ping Yeung and Vijay Gupta, "Five Hot Spots for Assertion-Based Verification," DVCon 2005.

[3] Harry Foster and Ping Yeung, "Planning Formal Verification Closure," DesignCon 2007.

[4] Questa SV User Manual, Version 6.6, Mentor Graphics.

[5] Mike Turpin, "The Dangers of Living with an X," ARM.

[6] 0-In Formal User Guide, Version 3.0, Mentor Graphics.

 

About the Author:

Ping Yeung is a principal engineer at Mentor Graphics. Yeung was part of the 0-In team that developed and introduced assertion-based verification (ABV) to the industry. He has over 16 years of application development, marketing, and product development experience in the EDA industry, including positions at 0-In, Synopsys, and Mentor Graphics. He holds 5 patents in CDC and formal verification and a Ph.D. in computer engineering from the University of Edinburgh.
