
Showdown at the “Is My Design OK?” Corral

There’s a battle shaping up as yet another entrant into the assertion synthesis field makes some noise this week.

We’ve kept an eye on assertion synthesis over the last year or so. Tools and methodology vendors lament that assertion-based verification has been slow to catch on. Some say that’s because it’s too hard to generate and work with assertions. Hence tools to make that easier.

Last summer we reported on NextOp’s entry into the fray. We also noted that their approach was qualitatively different from that of another player, Zocalo, in that Zocalo enables assertion synthesis before the design is done; NextOp generates assertions from RTL already written.

This week, Jasper is tossing their hat into the ring as well. And, unlike Zocalo and NextOp, Jasper not only generates, but also processes assertions using their formal technology.

Jasper joins NextOp in their approach to synthesis: they do it after the RTL has been created. To be specific, Jasper refers to this as property synthesis, since they can create assertions (generated at block outputs), constraints (generated at block inputs), and cover properties.
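The output/input/cover split above can be sketched in a few lines. This is an illustrative toy, not Jasper's actual behavior or API: it simply maps a mined behavior to a property kind based on which side of the block its signal sits on (treating internal points as cover candidates is my assumption, not something the tool necessarily does).

```python
# Hypothetical sketch: a mined behavior becomes an assertion, a
# constraint, or a cover property depending on the signal's direction.
def classify_property(signal: str, directions: dict) -> str:
    """Map a mined behavior on `signal` to a property kind."""
    direction = directions.get(signal)
    if direction == "output":
        return "assertion"   # check the block's own behavior
    if direction == "input":
        return "constraint"  # assume legal stimulus from the environment
    return "cover"           # internal point: observe it, don't prove it

directions = {"req": "input", "gnt": "output", "state": "internal"}
print(classify_property("gnt", directions))    # assertion
print(classify_property("req", directions))    # constraint
print(classify_property("state", directions))  # cover
```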

Now, it would be really nice if you could just toss your design to a tool and say, “Wake me when you’re done.” Problem is, the tools can only guess at what is going on in the design. They may be able to make some good guesses, but they’re only inferences, and, as the repository of knowledge of what’s really going on, you have to help the tool out.

So Jasper describes a basic three-step process for getting properties going.

The first step involves gathering all known information about the design. Mostly this is the RTL, but you can bring the testbench and simulation waveforms into the picture as well, all of which help establish intent. From here you identify the RTL block signals and tell the tool which things you’d like to look at. Jasper refers to these as “points of interest” – POIs. These POIs are tracked throughout the tool as specific reference points.

Once the tool knows how the design is defined, you can then run ActiveProp to generate a set of candidate properties. You do this either by running the simulation in advance and then feeding the results to ActiveProp, or by using PLI to have ActiveProp “watch” the simulation as it happens.

What’s going on here is that ActiveProp looks at the design and then at the simulation results and decides, “OK, based on what I’m seeing, here’s what I think the overall design intent is.”

This is a critical step. The whole idea of formal verification is that you can prove things without having to exercise every possible scenario in a simulation. If you’re testing a counter, it’s easier to prove that n always leads to n+1 than it is to run through every possible value of n to show that it always works. So if, between the RTL and the behavior during simulation, the tool can see that there are enough cases of n leading to n+1, it can generalize the higher-level rule. From that rule you can prove the general case rather than simulating the missing specific cases.
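The counter example above can be made concrete with a toy version of the inference step. The trace and the counter width are invented for illustration; a real tool works on RTL signals, not Python lists. The point is that the mined rule covers all values, including the wrap-around the simulation never reached.

```python
# Toy inference: given sampled counter values from a simulation,
# check whether "n always leads to n+1 (mod 2**width)" holds.
def infer_increment_rule(trace, width=4):
    """True if every observed transition is n -> (n + 1) mod 2**width."""
    modulus = 2 ** width
    return all((cur + 1) % modulus == nxt
               for cur, nxt in zip(trace, trace[1:]))

# The simulation only ever showed 0..5, but the generalized rule
# applies to all 2**width values without simulating each one.
trace = [0, 1, 2, 3, 4, 5]
print(infer_increment_rule(trace))           # True
print(infer_increment_rule([0, 1, 3]))       # False: 1 -> 3 breaks it
```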

A particular simulation run may also be misleading. For example, it may show a counter only increasing its count. The inference there could be either that it should be an up-counter or that you’ve forgotten to test down-counting. Jasper allows subsequent simulations to supplement the existing data set; you can use this to fill in holes and improve the quality of inference and extrapolation that the tool makes.
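To see how a supplemental run corrects a misleading inference, consider this sketch (the runs are made up): from the first trace alone, “the count only increases” looks like a rule; merging in a later run that exercises down-counting refutes it, and the candidate is withdrawn.

```python
# Hypothetical hole-filling: a candidate rule must hold over *all*
# accumulated simulation runs, not just the first one.
def is_monotonic_up(trace):
    return all(b >= a for a, b in zip(trace, trace[1:]))

def holds_over_runs(rule, runs):
    """A candidate survives only if every run is consistent with it."""
    return all(rule(t) for t in runs)

run1 = [0, 1, 2, 3, 4]   # first run: up-counting only
run2 = [4, 3, 2, 1]      # supplemental run exercises down-counting

print(holds_over_runs(is_monotonic_up, [run1]))        # True
print(holds_over_runs(is_monotonic_up, [run1, run2]))  # False
```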

But, of course, the tool is making assumptions. Ordinarily, when you and I assume, we make the proverbial donkey out of you and me. But when you have a tool doing the assuming, then only you are at risk of looking ass-inine. So you have to confirm the tool’s decisions.

Which is the third step in the process. The properties are all listed, and you can explicitly disposition each one as “certified,” “uncertain,” or “don’t care.”
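The review step can be sketched as a simple triage. The three labels come from the process described above; the candidate properties and the idea that only certified ones move forward are my illustrative assumptions.

```python
# Hypothetical disposition pass over generated candidates.
CANDIDATES = [
    ("gnt never asserted without req", "certified"),
    ("count only increases",           "uncertain"),   # simulation hole?
    ("debug_bus stays 0",              "don't care"),
]

def keep_for_formal(candidates):
    """Keep only the properties the engineer has certified."""
    return [prop for prop, verdict in candidates if verdict == "certified"]

print(keep_for_formal(CANDIDATES))  # ['gnt never asserted without req']
```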

All of this is well and good, but, given an arbitrary design, a bunch of stimulus results, and a non-human reasoning machine with no a priori knowledge of design intent, you can imagine that the tool might conjure up a huge number of different design characteristics that may or may not be relevant or important. It would then be up to you to wade through this morass to make sense out of it and to decide which things to keep and which to jettison.

So a tool’s ability to generate better properties than its competitors’ would be a critical differentiator. The gold standard would be a tool that could infer the necessary and sufficient set of properties, neither under- nor over-scoped. That, of course, is a tall order.

And it seems to me that this is where the battle lines will be drawn. Zocalo is different in that they’re mostly making manual property creation much easier, so it doesn’t have this issue. But because NextOp and Jasper infer properties from the design automatically, this could be a key point of competition. Jasper might argue that they have an edge because of their integration with their formal engines, but it’s entirely likely that users would sacrifice that integration if necessary for the sake of better properties or better formal. So Jasper can count on that only if they’ve done their homework both on quality of properties inferred and on formal analysis.

Jasper claims to have a high “signal-to-noise” ratio, suggesting that the vast bulk of the properties they generate are relevant and properly extrapolated. They also classify, rank, and merge properties to give a better starting point as you wade in to examine what ActiveProp generated.
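One plausible (and entirely hypothetical) way to picture the merge-and-rank step: identical candidates mined from different runs are merged, and the survivors are ordered by how much simulation evidence supports them. Nothing here reflects Jasper’s actual scoring, which the article doesn’t detail.

```python
# Hypothetical merge-and-rank over mined candidate properties.
from collections import Counter

mined = [
    "req |-> ##[1:3] gnt",   # seen in run 1
    "req |-> ##[1:3] gnt",   # seen again in run 2 -> merged
    "busy stays low",
]

def rank_candidates(candidates):
    """Merge duplicate candidates and sort by support count, highest first."""
    support = Counter(candidates)
    return sorted(support.items(), key=lambda kv: -kv[1])

for prop, count in rank_candidates(mined):
    print(count, prop)
```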

Jasper also claims that ActiveProp can generate complex multi-cycle properties, which is a bit of a fuzzy claim. It’s one thing to generate properties that span a couple of cycles; it’s much harder if the property deals with a complex protocol that has several signals interacting over the course of many cycles.
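For a sense of what a multi-cycle property even means, here’s a toy check of “every req is followed by a gnt within 3 cycles” over a recorded trace. The signals and window are invented; checking such a property over a trace is the easy part, while inferring it from scratch across many interacting signals is the hard part the claim glosses over.

```python
# Toy multi-cycle check over a recorded trace of (req, gnt) samples.
def req_granted_within(trace, window=3):
    """True if every req sample sees a gnt within `window` later cycles."""
    for i, (req, _) in enumerate(trace):
        if req and not any(g for _, g in trace[i + 1:i + 1 + window]):
            return False
    return True

ok_trace  = [(1, 0), (0, 0), (0, 1), (1, 0), (0, 1)]
bad_trace = [(1, 0), (0, 0), (0, 0), (0, 0)]  # gnt never arrives

print(req_granted_within(ok_trace))   # True
print(req_granted_within(bad_trace))  # False
```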

Multi-cycle coverage, then, can form another aspect of the quality of properties generated. Which leads to a potential challenge for users: how do you measure the overall quality of a set of properties? How would you benchmark two tools? This whole domain is a relatively amorphous blob, and it’s not clear that benchmark standards exist.

So, even as we watch the combatants’ hands hover by their holsters, loosening their fingers and staring each other down, the rules of engagement are not at all clear. Perhaps they will materialize organically as the technology matures. If not, then, if history is any guide, these guys are likely to define their own self-serving ones, leaving it to users to figure out who the real winner is.

