
Moving Beyond Simulation

Traditionally, simulation-based dynamic verification techniques — think directed tests, constrained-random simulation and the like — have collectively been the workhorse of functional verification. But it’s fair to say that this horse is straining under the load and in need of some relief. The source of this strain: increasing SoC integration and overall complexity, which in turn drives up verification costs. Software licenses, hardware servers and the staff to set up and run regression environments all cost real money, which of course remains scarce given the tepid economy. And even with unlimited resources, increasing design complexity leaves more and more code effectively beyond the reach of simulation.

Luckily, relief is available in the form of various static verification techniques. One such technique is formal verification, a systematic process of ensuring, through exhaustive algorithmic techniques, that a design implementation satisfies its initial requirements and specs [1]. Instead of stimulating the design and observing its behavior, formal verification relies on heavy doses of math to analyze all possible executions. Formal verification often works well with assertion-based verification [2]. Once a specific behavior is captured with a property language such as PSL or SVA, formal verification can be used to crawl even into the darkest nooks and crannies of the code.
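For readers new to SVA, here is a minimal sketch of what such a property might look like; the signal names (clk, rst, req, gnt) and the three-cycle bound are purely illustrative, not taken from any particular design:

// Hypothetical SVA property: every request must see a grant within 1 to 3 cycles.
// A formal tool tries to prove this for all possible input sequences, not just
// the ones a testbench happens to generate.
property p_req_gets_gnt;
  @(posedge clk) disable iff (rst)
    req |-> ##[1:3] gnt;
endproperty
assert_req_gets_gnt: assert property (p_req_gets_gnt);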

Another useful technique is automatic formal check, which is especially helpful for legacy designs that do not have assertions. As its name implies, automatic formal hands the task of looking for design issues over to a bot of sorts. This automation, available in a handful of tools from leading EDA vendors, makes formal verification accessible to designers who are not yet ready to write properties. Here’s how it works, in a nutshell: leveraging a set of pre-defined assertion rules, automatic formal check analyzes the RTL structure of the design and characterizes its internal states. It then identifies potential trouble spots and checks the design for those undesired behaviors.

Any Google search reveals loads of resources on formal verification. So in this article, I am going to describe two areas where automatic formal check, a relative newcomer, can be used to supplement dynamic simulation: dead code identification and X-semantic check. My hope is that this information helps designers find better ways to rewrite their RTL code early in the design phase of the project, and helps verification engineers identify these adverse effects in their regressions.

Dead Code Identification

Those of us who work on static verification are occasionally accused of monomania by our colleagues for our single-minded pursuit of a technique many consider difficult and obscure. So, before going further, let me state unequivocally: dynamic simulation is essential for verifying design functionality. Indeed, simulation makes possible an understanding of coverage, undoubtedly the single most important measure of verification completeness.

The coverage space for today’s designs is a multi-dimensional, orthogonal set of metrics [3]. It can be a white-box metric measuring coverage inside the design, or a black-box metric measuring end-to-end behavior regardless of how the design is represented. For instance, statement and condition coverage are examples of implicit white-box metrics that can be derived automatically from the RTL model. A scoreboard is an example of a higher-level, explicit black-box metric that ignores the implementation details of the design; it can be used even when the design is represented at different levels of abstraction.
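As a rough sketch of what the black-box side looks like in practice (the class and method names below are hypothetical, and a real scoreboard would be transaction-aware), a scoreboard simply compares observed outputs against queued expectations, with no reference to how the RTL is implemented:

// Minimal scoreboard sketch: black-box checking and a crude measure of how
// much end-to-end behavior has actually been exercised.
class simple_scoreboard;
  int unsigned expected_q[$];   // expectations pushed by the stimulus side
  int unsigned num_checked;     // count of outputs checked end to end

  function void push_expected(int unsigned value);
    expected_q.push_back(value);
  endfunction

  function void check_actual(int unsigned value);
    if (expected_q.size() == 0)
      $error("unexpected output: %0d", value);
    else if (expected_q.pop_front() != value)
      $error("output mismatch: %0d", value);
    else
      num_checked++;
  endfunction
endclass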

Today, when constrained-random simulation fails to achieve the targeted coverage goal, engineers have to fine-tune the environment or add new tests. These efforts, often attempted relatively late in the verification cycle, can consume vast amounts of time and resources while still failing to reach parts of the design.

Most designs have dead code, unreachable blocks, and redundant logic. This is especially true for IP or reused blocks, which often carry extra functionality unnecessary for the current design. If implicit white-box coverage metrics such as statement and condition coverage are part of the closure criteria, this unused functionality will have a negative impact on the coverage numbers. Formal verification can therefore be used to identify unreachable code early in the verification cycle and subsequently to eliminate those targets from the coverage model. As a result, the coverage measurement will more accurately reflect the quality of the stimuli from the constrained-random tests.

For each implicit white-box coverage metric [4], there are scenarios where coverage is not possible:

Statement coverage: unreachable statements

Branch coverage: unreachable branches, duplicated branches, and unreachable default branches

Finite state machine coverage: unreachable states and unreachable transitions

Condition coverage: unused logic, un-driven values, implicit constant values

Expression coverage: unused logic, un-driven values, implicit constant values

Toggle coverage: stuck-at values and unreachable toggles

Figure 1: White-box coverage metrics and the associated unreachable code

 

The secret to automatic formal check’s ability to find unreachable code is a focus on analyzing the design’s controllability and observability [3].

There are two types of unreachable dead code. One type is based on the semantics of the language, and the other is based on the functionality of the design. A lint tool is good at detecting semantic dead code. For instance, in the example below, the default dont_care assignment to the accelerate signal is unreachable based on synthesis semantics, and it will be reported during the linting process. On the other hand, if the brake_pedal and gas_pedal signals are mutually exclusive, the error assignment to the accelerate signal is functionally unreachable too. This will not be detected by lint, but it will be identified by automatic formal check. By analyzing the flattened netlist representation of the design, the tool can derive these signal relationships automatically.

begin
  case ({brake_pedal, gas_pedal})
    2'b00: accelerate = no;
    2'b01: accelerate = yes;
    2'b10: accelerate = no;
    2'b11: accelerate = error;
    // All four values of the 2-bit selector are covered above, so the
    // default branch is unreachable under synthesis semantics.
    default: accelerate = dont_care;
  endcase
end

Figure 2: Default dont_care assignment unreachable based on synthesis semantics

 

You might ask here, “Isn’t this a common situation any time design teams integrate multiple IP blocks together? Doesn’t everyone know that IP needs to account for all the possible scenarios where it will be used?” The answer to both is, “Yes,” but those two questions aren’t the only ones that need to be asked.

The inputs to the IP are usually constrained to a subset of scenarios by the previous modules. As a result, there is redundant and unreachable logic in the IP. How can you deal with this logic? The best way, I think, is by specifying assumptions and constraints in terms of signal relationships at the input ports of a design. By leveraging this functional information, any automatic formal check tool worth its salt can identify unused logic, unreachable code, implicit constants, and stuck-at values in the design.
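As a minimal sketch of what such a constraint might look like, the SVA assumption below reuses the pedal signals from figure 2 and states that they are never asserted together at the block’s inputs; the clock name and the label are illustrative:

// Hypothetical input constraint for the formal tool: brake_pedal and gas_pedal
// are mutually exclusive at this block's ports. With this assumption in place,
// the error assignment in figure 2 becomes functionally unreachable.
asm_pedals_exclusive: assume property (@(posedge clk) !(brake_pedal && gas_pedal));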

X-Semantic Check

Dynamic simulation also falls short in the areas of design initialization and X-semantics, which are generally grouped together under the term X-semantic check. Sampling an unknown or uninitialized state (an X) necessarily produces an unpredictable value. If a design cannot be initialized reliably in silicon, it will not function correctly. If X’es are used in the RTL code, you need to monitor their entire lifecycle, from creation to propagation to usage.

Dynamic simulation does not interpret X’es as accurately as does silicon, which has only 1s and 0s. The goal is to eliminate the pessimistic X-propagation seen in simulation and to make sure an unknown or X-state is not generated or consumed unintentionally in the design [5]. Formal technology, with its ability to evaluate all 0/1 combinations of X’es in the design, is an antidote. Its thorough approach ensures that the design functionality will not be corrupted by X’es. Incorporating these rules internally, automatic formal check looks for issues related to register initialization, X-assignments, X-propagation, and X-termination.

 

always_ff @(posedge clk or posedge rst)
  if (rst) gas_pedal <= 1'b0;
  else     gas_pedal <= gas_pedal_status;

always_ff @(posedge clk)
  case ({brake_pedal, gas_pedal})
    2'b00: accelerate <= 1'b0;
    2'b01: accelerate <= 1'b1;
    2'b10: accelerate <= 1'b0;
    2'b11: accelerate <= 1'bx;
    default: accelerate <= 1'bx;
  endcase

always_ff @(posedge clk or posedge rst)
  if (rst) speedup_engine <= 1'b0;
  else if (in_gear) speedup_engine <= accelerate;

 

Figure 3: Automatic formal is a good bet for dealing with the semantically unreachable default branch of the case statement shown here

 

Let’s consider these potential X-semantic issues with the example in figure 3. Connecting a global reset to all the registers is ideal; however, due to power, area and routing constraints, this may not always be possible. In the example, the register gas_pedal is reset explicitly, while the register accelerate is not. In simulation, if the signal brake_pedal is X, the case statement will pick the default branch and the accelerate register will pessimistically become X. In reality, however, once gas_pedal is reset, the accelerate register will also be 0 one cycle later. It does not need to be reset explicitly; it is reset implicitly by its fan-in cone of logic.
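To see why, note that both branches of the case statement with gas_pedal equal to 0 assign 0 to accelerate. Spelled out as a hand-written property (a sketch only; an automatic formal check tool derives this relationship itself, and the label is illustrative), the implicit reset looks like this:

// Sketch: whenever gas_pedal is 0, as it is right after reset, accelerate is
// forced to 0 on the next clock edge regardless of brake_pedal.
chk_implicit_reset: assert property (@(posedge clk) !gas_pedal |=> !accelerate);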

Automatic formal check can be used to identify registers that will be reset implicitly and the ones that won’t. Of course, the registers that are not reset (explicitly or implicitly) must be initialized before they are used. Here, simulation can again miss something important. Depending on the tool, some simulators may pick up the default value of the data type. In that case, the problem may not be uncovered until it is too late.
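As a small, hypothetical illustration of that pitfall: in SystemVerilog, a 2-state type such as bit defaults to 0 while a 4-state type such as logic defaults to X, so the same missing initialization can look harmless or harmful depending only on how the register happens to be declared (the module and signal names below are made up for this example):

// Neither register below is reset; only the 4-state one exposes the problem.
module init_pitfall (input logic clk, din, output logic q2, q4);
  bit   r2;   // 2-state: silently starts at 0, masking the missing reset
  logic r4;   // 4-state: starts at X, so the missing reset shows up downstream
  always_ff @(posedge clk) begin
    q2 <= r2;   // reads 0 on the first cycle, hiding the bug
    q4 <= r4;   // reads X on the first cycle, exposing the bug
    r2 <= din;
    r4 <= din;
  end
endmodule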

There are two X-assignments in figure 3. As mentioned, the default branch of the case statement is semantically unreachable; hence, it is not a problem. Automatic formal check will report the other X-assignment if the selection signals brake_pedal and gas_pedal are not mutually exclusive.

By identifying all the reachable X-assignments, users can prevent X’es from being generated unintentionally. On the other hand, an X-assignment may not be a problem if the X value is never used. Distinguishing between used and unused X-assignments is critical, especially when there are many such assignments in the design. This means following the propagation and usage of those X’es.

X-propagation analysis examines the fan-out cone of the X-assignment until it terminates in one or more storage elements. It is difficult to know whether an X value will eventually be used or not. In figure 3, if the signals brake_pedal and gas_pedal are not mutually exclusive, the accelerate register may be assigned X. However, if the guarding signal in_gear is not true, the X value will not get into the output register speedup_engine. Hence, X-termination is handled and the downstream logic is protected from X contamination.
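For readers who want to see the X-termination condition written out, the check below is a hand-written sketch of the same requirement (the label is illustrative, and an automatic formal check tool applies an equivalent built-in rule without any user-written property):

// Sketch: whenever in_gear would let accelerate be captured into
// speedup_engine, accelerate must carry a known 0/1 value.
chk_x_terminated: assert property (@(posedge clk) disable iff (rst)
  in_gear |-> !$isunknown(accelerate));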

Summary

Considered together, the various static verification techniques form a powerful tool for improving the verification flow. Yet such techniques are merely complements, not complete solutions. At DesignCon in 2007, my colleague Harry Foster and I argued that there is no silver bullet in verification [3]. The same is true today, even after the arrival of various automatic formal check tools, including the one offered by Mentor Graphics.

Still, static verification deserves a closer look, especially since those who practice it seem to comprise one of the tiniest tribes in the world of chip design. The bottom line: static verification has been used strategically by designers to improve design quality and to complement dynamic verification on coverage closure. Besides helping identify dead code and X-semantic issues, as explained in this article, static verification also accelerates the discovery and diagnosis of design flaws during functional verification, reduces the time required to verify a design, and simplifies the overall development cycle for complex SoC designs. I’d bet that haggard-looking workhorse, dynamic simulation, will be happy to have the help.

References:

[1] Douglas Perry and Harry Foster, “Applied Formal Verification,” McGraw-Hill.

[2] Ping Yeung and Vijay Gupta, “Five Hot Spots for Assertion-Based Verification,” DVCon 2005.

[3] Harry Foster and Ping Yeung, “Planning Formal Verification Closure,” DesignCon 2007.

[4] Questa SV User Manual, Version 6.6, Mentor Graphics.

[5] Mike Turpin, “The Dangers of Living with an X,” ARM.

[6] 0-In Formal User Guide, Version 3.0, Mentor Graphics.

 

About the Author:

Ping Yeung is a principal engineer at Mentor Graphics. Yeung was part of the 0-In team that developed and introduced assertion-based verification (ABV) to the industry. He has over 16 years of application development, marketing, and product development experience in the EDA industry, including positions at 0-In, Synopsys, and Mentor Graphics. He holds five patents in CDC and formal verification and a Ph.D. in computer engineering from the University of Edinburgh.
