
Are We Done Yet?

OneSpin Takes on Formal Verification Metrics

“Is this it?”

“Yeah, I think so. Drop it off here.”

“Whaddaya mean, ‘I think so’? If they don’t get the money, we’re dead.”

“I know, I know. But you can’t see any addresses here. It’s all so beat up I don’t really know which house is which.”

“And you had to go without a map. Real smart.”

“There is no map, you moron! Not even Google will drive through this neighborhood. This is, like, No Man’s Land. I just want to get in, make the drop, and get out. Look, there, those guys are watching us. Givin’ me the creeps.”

“OK, well, drive around one more time to make sure. I don’t want to leave this in the wrong place. I like my life too much. I don’t actually know why… I just do.”

“All right, all right. But we’ve only got five minutes. They said drop it off by noon or don’t bother dropping it off.”

“OK, keep your head on. Jeez! We’ll drive around one more time and then make our best call. And then pray. It’s all we can do.”

It’s all well and good to verify a design, but how do you know when you’re done? Simulation has coverage metrics, so, whether or not you agree with their value, you at least know by that definition when you’re done (or as close to done as you’re going to get).

It’s not so easy with formal analysis. According to OneSpin’s Peter Feist, there are many aspects of formal for which no metrics exist. So you really don’t know if you’ve completed the job and everything is fine. You do what seems to make sense, and then the deadline hits and, absent a compelling reason to push out delivery, you declare yourself done. (And maybe pray.) It’s the best you can do.

This problem feeds into a bigger issue being addressed by Accellera with their Unified Coverage Interoperability Standard (UCIS) efforts. While the name suggests a focus on data that can be moved between tools, the effort also highlights verification areas for which there are no metrics to move around.

OneSpin is addressing this with their new Quantify MDV product (MDV standing for “Metrics-Driven Verification”). With this tool, they divide the totality of a design into six categories and, conceptually, color each category either some shade of green or some shade of red. Yeah, as you might guess, green is good and red is bad. The overall metric is the ratio of red to green. But that over-simplifies things.
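If you want a back-of-the-envelope feel for that roll-up, here’s a minimal sketch in Python. The item names are invented, and it takes “ratio of red to green” literally; the actual categories and scoring inside Quantify MDV are OneSpin’s own.

```python
# Minimal sketch of rolling per-item red/green verdicts into one metric.
# Item names and verdicts are invented for illustration.

items = {
    "alu.add_path":  "green",   # e.g. proven by assertions
    "alu.sat_logic": "red",     # e.g. no assertions cover it
    "fifo.ptr_wrap": "green",
    "fifo.dbg_hook": "green",   # verification-only logic, reviewed and accepted
    "ctrl.x_state":  "red",     # e.g. unreachable code, pending designer review
}

red   = sum(1 for v in items.values() if v == "red")
green = sum(1 for v in items.values() if v == "green")
print(f"red:green = {red}:{green}  ({red / green:.2f})")  # red:green = 2:3  (0.67)
```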

There are two very simple categories. There’s a green one for code that has been verified by assertions. And there’s a red one for code that hasn’t been verified by any assertions. The solution to the latter is obvious: add more assertions.

Then there are four other categories that are flagged for designer review. They may be OK or they may not be. By default, three of them are red and one is green.

The first of these is a red one: it indicates parts of the design that are excluded from verification because of constraints. The good news about formal analysis is that it can cover the entire design space. The bad news is that the space is huge, and only a “relevant” subset of it actually merits your attention. If you can focus analysis on that sub-space, then your verification will complete much more quickly.

You identify this sub-space by applying constraints. The problem is, it’s not like you can draw the whole space and then draw the sub-space and be done with it. The constraints you use are an attempt to rein in the verification heuristically, but they don’t explicitly map out the sub-space you’re trying to isolate.

It’s like trying to isolate a small circle inside a big one using only rectangles to mask off the unwanted area. It’s hard to position the rectangles so that they don’t cover any of the desired circle while still covering as much of the undesired area as possible. That means you can under-constrain (the only drawback being that verification takes longer than needed) or you can over-constrain (the drawback being that legitimate parts of the design get excluded from verification).

To check this, Quantify MDV will identify pieces of the design that will not be verified due to constraints. You can then decide whether or not this is acceptable.
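As a toy illustration of that trade-off (and emphatically not how the tool itself decides anything), you can model the full space as a set, the relevant behavior as a subset, and a constraint as a filter. An over-tight filter silently drops behavior that should have been verified; a loose one just wastes effort. The numbers and names below are made up.

```python
# Toy illustration of over- vs. under-constraining (not the tool's algorithm).
# The "design space" here is just 8-bit packet lengths; names are made up.

full_space = set(range(256))                            # everything formal could explore
relevant   = {n for n in full_space if 1 <= n <= 64}    # what actually matters

# An under-constraint keeps extra, irrelevant cases: analysis is slower but safe.
under_constrained = {n for n in full_space if n <= 128}

# An over-constraint accidentally drops legal cases: those are never verified.
over_constrained  = {n for n in full_space if n <= 32}

wasted   = under_constrained - relevant     # harmless extra work
excluded = relevant - over_constrained      # silently unverified behavior
print(len(wasted), len(excluded))           # 65 extra cases vs. 32 missed cases
```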

The next category consists of statements in the code that are completely unreachable. An example of this might be actions to be taken in a state that will never be reached (barring some sort of upset that puts the entire circuit in an unknown condition). This raises the question, of course, as to why the logic is there in the first place. Either it’s a mistake or it should be removed.
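One highly simplified way to picture how such code gets found: abstract the state machine as a transition graph and sweep reachability from the reset state. The FSM below is made up, and a real formal engine works on the actual RTL rather than a hand-drawn graph, but the idea is the same.

```python
# Simplified reachability check on a made-up FSM (formal tools do this on the
# actual RTL, over the full state/input space, not a hand-written graph).

from collections import deque

transitions = {
    "IDLE":  ["RUN"],
    "RUN":   ["DONE", "IDLE"],
    "DONE":  ["IDLE"],
    "DEBUG": ["IDLE"],          # nothing ever transitions *into* DEBUG
}

def reachable(start, edges):
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

unreachable = set(transitions) - reachable("IDLE", transitions)
print(unreachable)              # {'DEBUG'}: code handling that state is dead
```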

The next category consists of logic in the design that serves no discernible purpose. If some value or state is set and then never used, there’s no real reason to verify whether or not it works. Such logic would appear to be redundant. Again, this would generally be a red flag: either something’s wrong or the logic should be removed.
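In the same spirit, here’s a crude “set but never used” sketch over a hand-made netlist abstraction. The signal names are invented, and this is nowhere near the product’s real analysis, but it shows the kind of question being asked.

```python
# Toy "assigned but never read" check over a hand-made netlist abstraction.
# Real tools work on elaborated RTL; the signal names here are invented.

assignments = {                      # signal -> signals it is computed from
    "parity":    ["data"],
    "saved_cfg": ["cfg_in"],         # written on config load...
    "out":       ["data", "parity"],
}
outputs = {"out"}                    # signals that leave the block

used = set(outputs)
for driven, sources in assignments.items():
    used.update(sources)

dead = set(assignments) - used
print(dead)                          # {'saved_cfg'}: set, but nothing reads it
```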

Finally, there may be logic explicitly introduced to enable verification. As such, it plays no part in the normal functioning of the design, and so its verification would not be included in the plan. Even though it’s considered benign, and is hence green, it’s also submitted for designer review.

It bears noting that identifying these portions of the code is different from actually verifying them. OneSpin has a “quick” mode that lets you simply establish your coverage. This will go much faster than doing the full verification, and it may allow you to address problem areas before running the whole thing. Along with this mode, there’s another classification, “open,” indicating that an assertion covers some piece of the design, but it hasn’t been verified yet.

Mr. Feist believes that they are the first to establish something more solid for assessing formal verification progress. As such, it’s proprietary: the kind of thing that the UCIS project is trying to rationalize. But someone has to be first, and you have to start somewhere, so, assuming he’s right, he feels this can help provide some direction for this portion of the UCIS problem.

But, for the time being, this is intended to take their earlier gap identification work and move towards a more formal set of formal coverage metrics. So that you can know where you are, and praying plays less of a role.

 

More info:

OneSpin
