
Are We Done Yet?

OneSpin Takes on Formal Verification Metrics

“Is this it?”

“Yeah, I think so. Drop it off here.”

“Whaddaya mean, ‘I think so’? If they don’t get the money, we’re dead.”

“I know, I know. But you can’t see any addresses here. It’s all so beat up I don’t really know which house is which.”

“And you had to go without a map. Real smart.”

“There is no map, you moron! Not even Google will drive through this neighborhood. This is, like, No Man’s Land. I just want to get in, make the drop, and get out. Look, there, those guys are watching us. Givin’ me the creeps.”

“OK, well, drive around one more time to make sure. I don’t want to leave this in the wrong place. I like my life too much. I don’t actually know why… I just do.”

“All right, all right. But we’ve only got five minutes. They said drop it off by noon or don’t bother dropping it off.”

“OK, keep your head on. Jeez! We’ll drive around one more time and then make our best call. And then pray. It’s all we can do.”

It’s all well and good to verify a design, but how do you know when you’re done? Simulation has coverage metrics, so, whether or not you agree with their value, you can at least know by that yardstick when you’re done (or as close as you’re going to get).

It’s not so easy with formal analysis. According to OneSpin’s Peter Feist, there are many aspects of formal for which no metrics exist. So you really don’t know if you’ve completed the job and everything is fine. You do what seems to make sense, and then the deadline hits and, absent a compelling reason to push out delivery, you declare yourself done. (And maybe pray.) It’s the best you can do.

This problem feeds into a bigger issue being addressed by Accellera with their Unified Coverage Interoperability Standard (UCIS) efforts. While the title may focus on data that can be moved between tools, it also highlights verification areas for which there are no metrics to move around.

OneSpin is addressing this with their new Quantify MDV product (MDV standing for “Metrics-Driven Verification”). With this tool, they divide the totality of a design into six categories, and, conceptually, they color these categories either as some shade of green or some shade of red. Yeah, as you might guess, green is good and red is bad. The overall metric is the ratio of red to green. But that over-simplifies things.
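
As a rough sketch of the bookkeeping this implies (the category names, line counts, and the red-to-green calculation below are illustrative assumptions on my part, not OneSpin’s actual implementation), the overall metric might be tallied something like this:

```python
# Hypothetical tally of design code by Quantify-style category.
# Category names and line counts are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Category:
    name: str
    color: str   # "green" or "red" (possibly shaded after designer review)
    lines: int   # lines of design code falling into this category

categories = [
    Category("verified by assertions",            "green", 9200),
    Category("not verified by any assertion",     "red",   1400),
    Category("excluded by constraints",           "red",    300),
    Category("unreachable code",                  "red",    120),
    Category("logic with no discernible purpose", "red",     80),
    Category("verification-only logic",           "green",  150),
]

red   = sum(c.lines for c in categories if c.color == "red")
green = sum(c.lines for c in categories if c.color == "green")

# The article describes the overall metric as the ratio of red to green;
# lower is better.
print(f"red/green ratio: {red / green:.3f}")
```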

There are two very simple categories. There’s a green one for code that has been verified by assertions. And there’s a red one for code that hasn’t been verified by any assertions. The solution to the latter is obvious: add more assertions.

Then there are four other categories that are flagged for designer review. They may be OK or they may not be. By default, three of them are red and one is green.

The first of these is a red one: it indicates parts of the design that are excluded from verification because of constraints. The good news about formal analysis is that it can cover the entire design space. The bad news is that the space is huge, and there is a “relevant” subset of that space that merits your attention. If you can focus analysis on that sub-space, then your verification will complete much more quickly.

You identify this sub-space by applying constraints. The problem is, it’s not like you can draw the whole space and then draw the sub-space and be done with it. The constraints you use are an attempt to rein in the verification heuristically, but they don’t explicitly map out the sub-space you’re trying to achieve.

It’s like trying to isolate a small circle inside a big one using only rectangles to mask off the unwanted areas. It’s hard to position the rectangles so that they don’t cover any of the desired circle while still masking off as much of the undesired area as possible. That means you can under-constrain (the only drawback being that verification takes longer than needed) or you can over-constrain (the drawback being that you exclude legitimate parts of the design from verification).
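
To make the analogy concrete, here’s a small sketch (purely illustrative, with made-up geometry and a crude grid sample, nothing to do with how constraints actually work in a formal tool): any masking rectangle that clips the desired circle is an over-constraint, and any unmasked area outside the circle is under-constrained, wasted effort.

```python
# Illustrative only: constraints pictured as rectangles masking off parts of
# a plane, with a small circle standing in for the sub-space you want to keep.
import math

CIRCLE_CENTER, CIRCLE_RADIUS = (0.0, 0.0), 1.0
BIG = 4.0  # the full space: a square from -4 to +4 in x and y

# Each "constraint" masks off the axis-aligned rectangle (x0, y0, x1, y1).
masks = [(-4.0, 1.2, 4.0, 4.0),    # safely outside the circle
         (-4.0, -4.0, -0.9, 4.0)]  # clips the left edge of the circle

def in_circle(x, y):
    return math.hypot(x - CIRCLE_CENTER[0], y - CIRCLE_CENTER[1]) <= CIRCLE_RADIUS

def masked(x, y):
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in masks)

over_constrained = 0   # desired points a mask accidentally covers
under_constrained = 0  # undesired points no mask covers (wasted verification)
steps = 200
for i in range(steps):
    for j in range(steps):
        x = -BIG + 2 * BIG * i / (steps - 1)
        y = -BIG + 2 * BIG * j / (steps - 1)
        if in_circle(x, y) and masked(x, y):
            over_constrained += 1
        elif not in_circle(x, y) and not masked(x, y):
            under_constrained += 1

print("sample points of the desired space wrongly excluded:", over_constrained)
print("sample points of irrelevant space still being verified:", under_constrained)
```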

To check this, Quantify MDV will identify pieces of the design that will not be verified due to constraints. You can then decide whether or not this is acceptable.

The next category consists of statements in the code that are completely unreachable. An example of this might be actions to be taken in a state that will never be reached (barring some sort of upset that puts the entire circuit in an unknown condition). This raises the question, of course, as to why the logic is there in the first place. Either it’s a mistake or it should be removed.
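
A toy version of the reachability question (just a graph walk over a made-up state machine, not a stand-in for OneSpin’s actual engine) looks like this:

```python
# Illustrative only: find states that no sequence of transitions can reach.
from collections import deque

# A made-up FSM: state -> states it can transition to.
transitions = {
    "IDLE":  ["RUN"],
    "RUN":   ["DONE", "IDLE"],
    "DONE":  ["IDLE"],
    "DEBUG": ["IDLE"],   # nothing ever transitions *into* DEBUG
}

reachable, queue = {"IDLE"}, deque(["IDLE"])
while queue:
    for nxt in transitions[queue.popleft()]:
        if nxt not in reachable:
            reachable.add(nxt)
            queue.append(nxt)

unreachable = set(transitions) - reachable
print("unreachable states:", unreachable)   # {'DEBUG'} -- dead-code candidate
```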

The next category consists of logic in the design that serves no discernible purpose. If some value or state is set and then never used, there’s no real reason to verify whether or not it works. Such logic would appear to be redundant. Again, this would generally be a red flag: either something’s wrong or the logic should be removed.
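
The “set but never used” case can be pictured the same way; the sketch below (hypothetical signal names, nothing resembling real RTL analysis) simply cross-checks what gets driven against what gets read:

```python
# Illustrative only: flag signals that are driven but never consumed.
writes = {"data_out", "parity", "dbg_snapshot"}   # signals something assigns
reads  = {"data_out", "parity"}                   # signals something uses

redundant = writes - reads
print("driven but never read:", redundant)   # {'dbg_snapshot'} -- review it
```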

Finally, there may be logic explicitly introduced to enable verification. As such, it plays no part in the normal functioning of the design, and so its verification would not be included in the plan. Even though it’s considered benign, and is hence green, it’s also submitted for designer review.

It bears noting that identifying these portions of the code is different from actually verifying them. OneSpin has a “quick” mode that lets you simply establish your coverage. This will go much faster than doing the full verification, and it may allow you to address problem areas before running the whole thing. Along with this mode, there’s another classification, “open,” indicating that an assertion covers some piece of the design, but it hasn’t been verified yet.
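
In other words, a piece of the design can be in at least three states: not covered by any assertion, covered by an assertion that hasn’t been proven yet (“open”), or covered and proven. A hedged sketch of that bookkeeping, using my own labels rather than OneSpin’s:

```python
# Illustrative only: separating "an assertion targets this code" from
# "that assertion has actually been proven".
from enum import Enum, auto

class Status(Enum):
    UNCOVERED = auto()   # no assertion targets this code (red)
    OPEN      = auto()   # an assertion covers it, but proof isn't done yet
    PROVEN    = auto()   # covered and formally verified (green)

design_items = {
    "fifo_full_flag":  Status.PROVEN,
    "fifo_empty_flag": Status.OPEN,       # coverage established, proof pending
    "error_counter":   Status.UNCOVERED,
}

for item, status in design_items.items():
    print(f"{item:16s} {status.name}")
```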

Mr. Feist believes that they are the first to establish something more solid for assessing formal verification progress. As such, it’s proprietary – the kind of thing that the UCIS project is trying to rationalize. But, assuming he’s right, someone has to be first, and you have to start somewhere, so he feels that this can help provide some direction for this portion of the UCIS problem.

But, for the time being, this is intended to take their earlier gap-identification work and move towards a more formal set of formal coverage metrics, so that you know where you are and praying plays less of a role.

 

More info:

OneSpin
