
Are We Done Yet?

OneSpin Takes on Formal Verification Metrics

“Is this it?”

“Yeah, I think so. Drop it off here.”

“Whaddaya mean, ‘I think so’? If they don’t get the money, we’re dead.”

“I know, I know. But you can’t see any addresses here. It’s all so beat up I don’t really know which house is which.”

“And you had to go without a map. Real smart.”

“There is no map, you moron! Not even Google will drive through this neighborhood. This is, like, No Man’s Land. I just want to get in, make the drop, and get out. Look, there, those guys are watching us. Givin’ me the creeps.”

“OK, well, drive around one more time to make sure. I don’t want to leave this in the wrong place. I like my life too much. I don’t actually know why… I just do.”

“All right, all right. But we’ve only got five minutes. They said drop it off by noon or don’t bother dropping it off.”

“OK, keep your head on. Jeez! We’ll drive around one more time and then make our best call. And then pray. It’s all we can do.”

It’s all well and good to verify a design, but how do you know when you’re done? Simulation has coverage metrics, so, whether or not you agree with their value, you can at least know by that definition when you’re done (or as close as you’re going to get).

It’s not so easy with formal analysis. According to OneSpin’s Peter Feist, there are many aspects of formal for which no metrics exist. So you really don’t know if you’ve completed the job and everything is fine. You do what seems to make sense, and then the deadline hits and, absent a compelling reason to push out delivery, you declare yourself done. (And maybe pray.) It’s the best you can do.

This problem feeds into a bigger issue being addressed by Accellera with their Unified Coverage Interoperability Standard (UCIS) efforts. While the title may focus on data that can be moved between tools, it also highlights verification areas for which there are no metrics to move around.

OneSpin is addressing this with their new Quantify MDV product (MDV standing for “Metrics-Driven Verification”). With this tool, they divide the totality of a design into six categories, and, conceptually, they color these categories either as some shade of green or some shade of red. Yeah, as you might guess, green is good and red is bad. The overall metric is the ratio of red to green. But that over-simplifies things.
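To make the bookkeeping concrete, here’s a minimal sketch of how such a six-way split might roll up into one number. The category names, signal names, and the green-fraction arithmetic are all assumptions for illustration; OneSpin’s actual labels and rollup may differ. The six categories themselves are described in the paragraphs that follow.

```python
from enum import Enum

# Hypothetical names for the six categories; OneSpin's actual labels
# may differ. Two are green, and four are flagged for designer review.
class Category(Enum):
    PROVEN = "green: verified by assertions"
    UNCOVERED = "red: no assertion verifies this code"
    CONSTRAINED_OUT = "red: excluded by constraints (review)"
    UNREACHABLE = "red: statically unreachable (review)"
    DEAD = "red: drives no downstream logic (review)"
    VERIFICATION_ONLY = "green: test-only logic (review)"

GREEN = {Category.PROVEN, Category.VERIFICATION_ONLY}

def green_fraction(classified):
    """Roll per-statement categories up into one green-vs-red number."""
    if not classified:
        return 0.0
    return sum(cat in GREEN for cat in classified.values()) / len(classified)

# Hypothetical per-statement classification of a small design.
design = {
    "fifo_wr_ptr": Category.PROVEN,
    "fifo_rd_ptr": Category.PROVEN,
    "dbg_observe_mux": Category.VERIFICATION_ONLY,
    "legacy_mode_fsm": Category.UNREACHABLE,
    "ecc_syndrome": Category.UNCOVERED,
}

print(f"green: {green_fraction(design):.0%}")  # green: 60%
```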

There are two very simple categories. There’s a green one for code that has been verified by assertions. And there’s a red one for code that hasn’t been verified by any assertions. The solution to the latter is obvious: add more assertions.

Then there are four other categories that are flagged for designer review. They may be OK or they may not be. By default, three of them are red and one is green.

The first of these is a red one: it indicates parts of the design that are excluded from verification because of constraints. The good news about formal analysis is that it can cover the entire design space. The bad news is that the space is huge, and there is a “relevant” subset of that space that merits your attention. If you can focus analysis on that sub-space, then your verification will complete much more quickly.

You identify this sub-space by applying constraints. The problem is, it’s not as if you can draw the whole space, then draw the sub-space, and be done with it. The constraints you use are an attempt to rein in the verification heuristically, but they don’t explicitly map out the sub-space you’re after.

It’s like trying to isolate a small circle inside a big one using only rectangles to mask off the unwanted areas. It’s hard to position the rectangles so that they don’t cover any of the desired circle while still covering as much of the undesired area as possible. That means you can under-constrain (whose only drawback is that verification takes longer than needed) or over-constrain (whose drawback is that legitimate parts of the design are eliminated from verification).
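To see why the masking is hard to get right, here’s a toy numerical version of the analogy. All of the geometry here is made up: the square is the full design space, the unit circle is the “relevant” sub-space, and the rectangles play the role of constraints.

```python
import random

# Toy version of the masking analogy; all shapes are invented.
CIRCLE = (0.0, 0.0, 1.0)                        # (cx, cy, radius)
MASKS = [(-3, -3, 3, -0.5), (-3, 1.2, 3, 3)]    # (x0, y0, x1, y1)

def relevant(x, y):
    cx, cy, r = CIRCLE
    return (x - cx) ** 2 + (y - cy) ** 2 <= r * r

def masked(x, y):
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in MASKS)

over = under = 0
samples = 100_000
for _ in range(samples):
    x, y = random.uniform(-3, 3), random.uniform(-3, 3)
    if relevant(x, y) and masked(x, y):
        over += 1     # over-constrained: legitimate behavior masked off
    elif not relevant(x, y) and not masked(x, y):
        under += 1    # under-constrained: irrelevant space still analyzed

print(f"over-constrained:  {over / samples:.1%} of the space")
print(f"under-constrained: {under / samples:.1%} of the space")
```

Here the first rectangle clips the bottom of the circle (over-constraint), while neither rectangle reaches most of the area outside it (under-constraint). In a real tool the “shapes” are logical constraints rather than geometry, but the trade-off is the same.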

To check this, Quantify MDV will identify pieces of the design that will not be verified due to constraints. You can then decide whether or not this is acceptable.

The next category consists of statements in the code that are completely unreachable. An example of this might be actions to be taken in a state that will never be reached (barring some sort of upset that puts the entire circuit in an unknown condition). This raises the question, of course, as to why the logic is there in the first place. Either it’s a mistake or it should be removed.
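Conceptually, finding such code is a reachability computation. Here’s a minimal sketch over a made-up FSM transition graph; the state names are hypothetical, and a real formal tool would work on the elaborated logic rather than a hand-written state list.

```python
from collections import deque

# Hypothetical FSM: SAFE_SHUTDOWN has handler logic, but no reachable
# state ever transitions into it, so that logic is unreachable.
TRANSITIONS = {
    "IDLE": ["RUN"],
    "RUN": ["IDLE", "ERROR"],
    "ERROR": ["IDLE"],
    "SAFE_SHUTDOWN": ["IDLE"],  # nothing transitions *into* this state
}

def reachable(start):
    """Breadth-first search over the transition graph."""
    seen, frontier = {start}, deque([start])
    while frontier:
        for nxt in TRANSITIONS.get(frontier.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print(set(TRANSITIONS) - reachable("IDLE"))  # {'SAFE_SHUTDOWN'}
```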

The next category consists of logic in the design that serves no discernible purpose. If some value or state is set and then never used, there’s no real reason to verify whether or not it works. Such logic would appear to be redundant. Again, this would generally be a red flag: either something’s wrong or the logic should be removed.
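A crude way to picture this category is a def/use scan that flags signals assigned but never read. Everything below is illustrative only: the signal names are made up, and a real tool would iterate to a fixpoint (to catch whole dead chains) over elaborated RTL rather than a dictionary.

```python
# Toy def/use scan: signals that are assigned but never read feed no
# downstream logic, so verifying them proves nothing about the design.
assignments = {            # signal -> signals read on its right-hand side
    "sum":       {"a", "b"},
    "parity":    {"sum"},
    "spare_reg": {"a"},    # assigned, but nothing downstream reads it
    "out":       {"parity"},
}
outputs = {"out"}          # the design's observable outputs

read_somewhere = outputs.union(*assignments.values())
dead = set(assignments) - read_somewhere
print(f"candidate dead logic: {dead}")  # {'spare_reg'}
```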

Finally, there may be logic explicitly introduced to enable verification. As such, it plays no part in the normal functioning of the design, and so its verification would not be included in the plan. Even though it’s considered benign, and is hence green, it’s also submitted for designer review.

It bears noting that identifying these portions of the code is different from actually verifying them. OneSpin has a “quick” mode that lets you simply establish your coverage. This will go much faster than doing the full verification, and it may allow you to address problem areas before running the whole thing. Along with this mode, there’s another classification, “open,” indicating that an assertion covers some piece of the design, but it hasn’t been verified yet.
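So a quick-mode report might boil down to something like the sketch below. The “open” label comes from OneSpin; the assertion names and the “proven” label are invented for illustration.

```python
# Hypothetical quick-mode rollup: each assertion's coverage has been
# established, but some proofs haven't been run yet ("open").
statuses = {
    "a_no_fifo_overflow": "proven",
    "a_grant_onehot":     "open",   # covers code, proof still pending
    "a_ptr_in_range":     "open",
}
pending = sorted(a for a, s in statuses.items() if s == "open")
print(f"{len(pending)} open assertions: {pending}")
```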

Mr. Feist believes that they are the first to establish something more solid for assessing formal verification progress. As such, it’s proprietary: exactly the kind of thing that the UCIS project is trying to rationalize. But, assuming he’s right, someone has to be first, and you have to start somewhere, so he feels that this can help to provide some direction for this portion of the UCIS problem.

But, for the time being, this is intended to take their earlier gap-identification work and move towards a more formal set of formal coverage metrics, so that you can know where you are, and praying plays less of a role.

 

More info:

OneSpin
