
Combining Formal Verification With Simulation

You Can Have Your Cake and Eat it Too!

It is well known that formal verification offers exhaustive proofs of correctness and can help find deep corner-case bugs. Verification engineers have typically relegated its use to block-level designs, preferring simulation, the tried-and-tested workhorse, for chip-level and full-chip verification. The very strength of formal verification, its exhaustiveness, is also its Achilles' heel: as design size grows, formal engines run out of steam due to state-space explosion. Simulation, on the other hand, scales well with design size and does not suffer the same fate.

Why Simulation Alone Isn’t Enough

With ever-increasing design complexity and the desire to verify designs as exhaustively as possible, chip designers are increasingly interested in adopting formal verification methodologies. Despite being ubiquitous, simulation does not always provide the measure of confidence that the mathematical proofs delivered by formal tools can. With simulation, the incremental effort required to achieve each percentage point of coverage beyond the first 90 percent grows steeply, extending the verification cycle and becoming the long pole in the tent for getting the product out on time. Formal verification promises to deliver exactly what verification engineers need: exhaustive coverage of those hard-to-reach points of the design. The question to ask, then, is: what is the catch? Why not just combine the two technologies and get the best of both worlds?

Barriers to Formal Verification Adoption

In a predominantly simulation-friendly world, expertise in handling formal verification tools is in relatively short supply. Formal tools have also had the reputation of belonging to the verification "nerd" domain, a myth EDA vendors have been working hard to dispel. Then there is the real problem: a user cannot give up simulation to adopt formal, because formal is not a replacement technology. It is a complementary technology, with the ground rule that the user is not willing to give up simulation in exchange for using formal. Nor should setting up a formal environment take as long as setting up a simulation environment; the inability to reuse applicable parts of the simulation infrastructure is an all-important bottleneck to formal adoption. Another equally important but completely different barrier is the difficulty of sharing coverage information between formal and simulation. While both methodologies report coverage, they frequently do not follow the same reporting semantics, making the process of correlating coverage manual and tedious.

Hybrid Approach

A hybrid solution offers the best of both worlds by combining the strengths of formal methodologies and simulation into a powerful one-two verification punch. Formal tools may suffer from non-convergence and state-space explosion, while simulation tools require extensive testbench creation to stimulate bugs buried deep in the design state space, making bug hunting a hit-or-miss proposition. Combining the two overcomes the limitations of each individual technology. However, a hybrid approach that merely uses testbench stimuli to seed the formal searches suffers from the same drawback as a pure simulation approach: it is still hit or miss, because the formal searches are governed by the quality of the stimuli.

Alternatively, a formal-assisted simulation technique can be immensely powerful. In contrast to the approach just mentioned, it has no dependence on testbench stimuli. The idea is to start a formal search and exhaustively explore a limited state space. Before running into state-space explosion, the tool makes an intelligent switch to deep simulation, using the formal engine as a guide. Once the simulation has run for a while and explored parts of the design state space that would have exhausted the capacity of the formal engine, the approach switches to another localized formal search. By alternately using simulation to move deep into the design state space and deploying exhaustive formal proofs or searches for bugs, the hybrid solution holds the promise of delivering much better results than either engine alone could ever hope to achieve. With this approach, it must be realized that the quality of the results depends to a large extent on the heuristics used to switch from simulation to formal and vice versa: when to switch between the two, and how long to let each engine run before switching. A minimal sketch of this alternating loop appears below.
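To make the control flow concrete, here is a minimal sketch of that alternating loop, written in Python against a toy state machine. Everything here is illustrative: the Design model, the depth and cycle limits, and the breadth-first search are stand-ins for a real formal engine and simulator, not any vendor's API.

```python
import random

FORMAL_DEPTH = 10   # bounded formal search depth, stopped before state explosion
SIM_CYCLES = 1000   # how long to let simulation run between formal searches
MAX_PHASES = 20     # total formal/simulation alternations

class Design:
    """Toy stand-in for a DUT: states are integers, goals are target states."""
    def __init__(self, goals):
        self.goals = set(goals)

    def next_states(self, state):
        # Exhaustive successor function, as a formal engine would see it.
        return [(state * 3 + 1) % 100003, (state + 7) % 100003]

    def sim_step(self, state):
        # Randomized successor choice, as a simulator would make it.
        return random.choice(self.next_states(state))

def bounded_formal_search(design, frontier, depth):
    """Breadth-first exhaustive search up to `depth`; returns the goals hit
    and the deepest frontier, which seeds the next simulation phase."""
    hits, seen = set(), set(frontier)
    for _ in range(depth):
        nxt = set()
        for s in frontier:
            for t in design.next_states(s):
                if t in design.goals:
                    hits.add(t)
                if t not in seen:
                    seen.add(t)
                    nxt.add(t)
        frontier = nxt
    return hits, frontier

def simulate(design, seeds, cycles):
    """Run deep random simulation from the formal-chosen seed states."""
    hits, endpoints = set(), set()
    for s in seeds:
        for _ in range(cycles):
            s = design.sim_step(s)
            if s in design.goals:
                hits.add(s)
        endpoints.add(s)
    return hits, endpoints

design = Design(goals=[42, 31337, 99991])
covered, states = set(), {0}  # start from the reset state
for phase in range(MAX_PHASES):
    hits, states = bounded_formal_search(design, states, FORMAL_DEPTH)
    covered |= hits
    hits, states = simulate(design, list(states)[:8], SIM_CYCLES)
    covered |= hits
print(f"goals covered: {len(covered)}/{len(design.goals)}")
```

In a production tool, the seed states handed to simulation would be chosen by the formal engine's own heuristics rather than taken arbitrarily from the search frontier, and the depth and cycle budgets would adapt as the run progresses.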

A measure of the effectiveness of this approach can be gleaned from the following example. Consider a block of a graphics processing engine with about 300,000 gates and 15,000 sequential elements, for which about 4,000 coverage goals were specified. After a three-day run, a pure constrained-random simulation approach achieved 58 percent coverage. The hybrid formal-simulation approach increased coverage to 92 percent on the same design in the same amount of time, a 59 percent relative improvement over simulation alone ((92 − 58) / 58 ≈ 0.59). In some cases, coverage improvements as great as 100 percent have been achieved using the hybrid approach.

Challenges in Combining Formal Methodologies With Simulation

The hybrid approach is not without challenges. As with a purely formal approach, convergence can be an issue, and the switching heuristics must be constantly monitored and improved to get the best of both worlds. A second challenge is integrating the simulation infrastructure with that of the formal engine: any applicable part of a testbench that can be reused must be reused. Leveraging the existing testbench infrastructure is a guaranteed way to overcome objections from hardcore simulation experts who do not want to relinquish any of the benefits of their tried-and-tested approach. Finally, an integrated view of coverage information is required to assess the effectiveness of the verification runs and to gauge whether the desired coverage targets have been achieved.
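As a rough illustration of what such an integrated coverage view involves, the Python sketch below merges per-goal results from the two engines. The record formats and goal names are invented for illustration; real formal and simulation tools each keep their own coverage databases, and reconciling their differing semantics is precisely the hard part described above.

```python
# Hypothetical per-goal records: simulation reports hit counts,
# formal reports proof results ("covered" or "unreachable").
sim_coverage = {"fifo_full": 12, "fifo_empty": 340, "err_retry": 0}
formal_coverage = {"err_retry": "covered", "fifo_overflow": "unreachable"}

def merge(sim, formal):
    """Combine both views into a single status per coverage goal."""
    merged = {}
    for goal in set(sim) | set(formal):
        if formal.get(goal) == "unreachable":
            # A formal unreachability proof removes the goal from the target.
            merged[goal] = "unreachable (proven by formal)"
        elif sim.get(goal, 0) > 0 or formal.get(goal) == "covered":
            merged[goal] = "covered"
        else:
            merged[goal] = "uncovered"
    return merged

for goal, status in sorted(merge(sim_coverage, formal_coverage).items()):
    print(f"{goal:15s} {status}")
```

Note how a goal proven unreachable by the formal engine can be excluded from the coverage target altogether, something simulation alone can never establish.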

Conclusion

A hybrid approach combining the best of formal and simulation technologies leads to more complete verification. If architected properly, such a solution can drive broader adoption of formal technologies, even among die-hard simulation engineers. The hybrid solution can extend the reach of a formal tool and increase the chances of finding bugs deep in the design. Leveraging components of the existing testbench infrastructure and merging coverage information with the simulation environment can usher the benefits of formal technology into a predominantly simulation-friendly world. With the hybrid approach, it is indeed possible to have the simulation cake and eat the formal cake too!

About the author: Krishna Balachandran is currently director of product marketing for low power verification products at Synopsys. He holds a master’s degree in computer engineering from the University of Louisiana and completed the executive MBA certification program from Stanford University.

 
