
Are We There Yet?

New Ways to Reduce Verification Time

It’s pretty much a given that the verification of an IC takes longer than the actual design. When presenting their product pitches, companies seem to have stopped spending as much time as they used to on arguing that verification is a) important and b) hard. We’ve all come to accept that.

Much of the focus on making verification easier has been on large-scale infrastructural elements like transaction-level modeling (TLM). They’re major changes to how things are done, or they make it possible to do things that weren’t possible before. But recently, there have been some announcements that, by those standards, may seem more modest, and yet, if they meet the expectations they’ve set, should still have a significant impact on the time it takes to check out a chip. Don’t think of them as new methodologies; they’re productivity enhancers that make it easier to use the new methodologies.

So let’s take a look at three of these ideas, each of which addresses a different bottleneck. We’ll look at them in no particular order.

Know the protocol

The existence of third-party IP is predicated on the benefits of not having to design certain parts of a chip yourself. The idea is to save a lot of time by using something that’s already built for you. When it comes to some of the more complex IP for things like protocols or other standards, one of the major benefits is that you no longer have to learn the spec in depth. And, seriously, if you had to design a PCI Express block from scratch, how much time do you think would be dedicated to studying the rather impressive document that tells you what you have to do?

So by using IP, you get to skip that step along with the design work itself. Of course, it’s not like you plug in the IP and forget about it. You have to verify it, both as a QA check on your IP provider and when verifying the entire system. And you can obtain verification IP that will let you run those tests.

And exactly how do you know which tests you need to perform? Why, you study the spec, of course! You know, that step you got to skip because you were buying IP instead of building it yourself.

Yeah… kind of defeats the purpose.

Denali has addressed this with their PureSpec product, which essentially encapsulates the IP spec in a tool. They have gone through the spec and identified all of the “musts,” “shalls,” and “mays” and organized them so that you don’t have to. Instead, you can use their GUI to browse the various tests, selecting which to perform and which to skip on any given run. They tie each test to the spec itself, so, for example, if a test fails, you can click through to the relevant text of the standard to see what was supposed to happen.
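To get a feel for the idea (without pretending to know Denali’s internals), imagine each check carrying a pointer back to the clause it enforces, so that a failure points you straight at the relevant text. The Python sketch below is purely illustrative; the field names, the clause numbers, and the example check are all made up.

# A hypothetical sketch of spec-driven test selection: each check is tied
# back to the clause of the standard it enforces. Names and fields are
# illustrative only, not Denali's actual data model.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SpecCheck:
    clause: str           # e.g. "section 2.2.1" (illustrative)
    requirement: str      # "must" / "shall" / "may"
    text: str             # the relevant sentence from the standard
    check: Callable[[dict], bool]  # predicate over captured traffic
    enabled: bool = True  # toggled from a test-selection GUI or config file

def run_selected(checks, traffic):
    for c in checks:
        if not c.enabled:
            continue
        if not c.check(traffic):
            # On failure, report the clause so the user can jump to the spec.
            print(f"FAIL [{c.clause}] ({c.requirement}): {c.text}")

# Example: a made-up check on a captured transaction-layer packet.
checks = [
    SpecCheck(
        clause="section 2.2.1 (illustrative)",
        requirement="must",
        text="The length field must match the actual payload length.",
        check=lambda t: t.get("length") == len(t.get("payload", [])),
    ),
]
run_selected(checks, {"length": 4, "payload": [1, 2, 3]})  # prints a failure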

Now, obviously, if you’re having a really bad day with a really bad piece of IP that keeps sending you back to the spec to learn what’s going wrong, you may end up becoming all too familiar with the functionality. But on an average day, you should only have to focus on problem areas rather than the entire spec. And on a good day, everything will just work. (Yeah… and then you wake up…)

Becoming assertive

Another high-level verification methodology innovation is the use of assertions. You can write assertions into your tests, and, better yet, you can link to libraries of checkers to avoid having to rewrite common checks.

But according to Zocalo, this hasn’t taken off, largely because it’s still too cumbersome and time consuming to do so. It’s that kind of thing where you can’t quite justify to your boss why you didn’t do it, but, if you were honest, you’d simply say that it’s too much of a pain in the butt.

Zocalo actually divides this world into two camps: the kinds of assertions that designers put in, which tend to be relatively simple and just check a block’s obvious functionality, and the ones verification engineers put in, which are much more complex and are intended for comprehensive checkout.
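In the real world these would be written as SystemVerilog assertions; the little Python sketch below is only meant to illustrate the difference in scope between the two camps, with all signal names invented for the occasion.

# Designer-level: a simple, local invariant on one block.
def check_fifo_never_overflows(cycle):
    # "If the FIFO is full, nobody should be pushing."
    assert not (cycle["full"] and cycle["push"]), "push while full"

# Verification-level: a temporal, multi-cycle protocol property.
def check_request_gets_grant(trace, max_wait=8):
    # "Every request must be granted within max_wait cycles."
    for i, cycle in enumerate(trace):
        if cycle["req"]:
            window = trace[i : i + max_wait + 1]
            assert any(c["grant"] for c in window), f"req at cycle {i} never granted"

# A tiny simulated trace to exercise both checks.
trace = [
    {"full": False, "push": True,  "req": True,  "grant": False},
    {"full": False, "push": False, "req": False, "grant": True},
    {"full": True,  "push": False, "req": False, "grant": False},
]
for cycle in trace:
    check_fifo_never_overflows(cycle)
check_request_gets_grant(trace)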

So if designer-level assertions are too messy to bother with, you can imagine that the verification-engineer-level ones would be even worse. Zocalo is trying to address both, although, for the time being, they’ve made an announcement on the designer-level problem with a tool called Zazz, with the verification portion to follow sometime before the end of the year.

The idea here is to help connect designers to libraries in a more straightforward fashion. They’ve done this with a GUI that lets you browse the libraries and then pick and configure checkers to implement the specific checks you want. At this level, you’re just seeing blocks and tests and clicking checkboxes. The tool then does the underlying dirty work of writing whatever cryptic text is required to make it work: the pain-in-the-butt stuff.

When implementing the tests, you can select either to keep the test as a library reference (that is, bind it) or actually instantiate it in the design. It appears that if you do the latter, you can’t edit the test later; if you bind it, you can go in and reconfigure things if you want to make changes.
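Zocalo hasn’t published what Zazz actually emits, but the flavor of the “dirty work” can be sketched: a checkbox-style selection gets turned into either a bind-style library reference or an inline instantiation. The checker and module names below are invented for illustration; this is a sketch of the idea, not Zazz’s actual output.

# Turn a configuration into checker hookup text (illustrative names only).
def emit_checker(design_module, checker, instance, ports, use_bind):
    port_list = ", ".join(f".{formal}({actual})" for formal, actual in ports.items())
    if use_bind:
        # Keep the checker as a library reference bound to the design module,
        # which leaves it easy to reconfigure later.
        return f"bind {design_module} {checker} {instance} ({port_list});"
    # Or instantiate it directly inside the design.
    return f"{checker} {instance} ({port_list});  // instantiated in {design_module}"

print(emit_checker("fifo_ctrl", "assert_no_overflow", "chk0",
                   {"clk": "clk", "full": "full", "push": "wr_en"},
                   use_bind=True))
# bind fifo_ctrl assert_no_overflow chk0 (.clk(clk), .full(full), .push(wr_en));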

Their expectation is that, with this simple addition, the barrier to using assertions will drop significantly enough to get designers to use them on a regular basis.

Have they gotten to ZZ Top yet?

Our third entry deals with yet another verification challenge: coverage. How do you know when your design is really ready for prime time? At what point have you done enough verification? That’s actually not a simple question. Ideally, you want tests covering all of the design’s functionality so that you have 100% confidence.

But what does “coverage” mean? It’s convenient to use a “structural” measure like hitting every line of the design. But in theory, you want to have pushed every possible value through every possible point of functionality, in all combinations, legal and illegal (although we can reward good behavior by letting you get away with only the values that are feasible).
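As a toy illustration of the difference (with the fields and bins invented for the purpose), structural coverage asks whether a line of code executed, while functional coverage asks whether every interesting combination of values was observed:

from itertools import product

burst_lengths = [1, 4, 8, 16]          # interesting values of one field
transfer_modes = ["read", "write"]     # interesting values of another

# Full functional (cross) coverage wants every legal combination observed.
wanted = set(product(burst_lengths, transfer_modes))
seen = set()

def observe(burst, mode):
    seen.add((burst, mode))

# A few observed transactions...
observe(4, "read")
observe(16, "write")

coverage = 100.0 * len(seen & wanted) / len(wanted)
print(f"functional coverage: {coverage:.0f}%")   # 25%
print("still missing:", sorted(wanted - seen))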

It’s that kind of coverage that’s practically impossible to get all the way to 100%. You see numbers in the 80-to-90-percent range as frequent asymptotes; beyond that, you have to ship a product or else completely miss your market window.

So why can’t you get to 100 percent? Back in the day, automatic test pattern generators looked at designs and created tests specifically intended to exercise nodes. The main focus was simply to ensure that each node wasn’t stuck at a high or low value. But as designs have become monumentally more complex, the computation required to calculate the patterns has long since made that approach appear quaint.

Instead, random shotgun blasts are sprayed at the design under the assumption that if you do that enough, you’ll exercise all the functionality of the design. It’s the equivalent of hiring monkeys with typewriters (OK, keyboards) to produce the Encyclopaedia Britannica. You can constrain the inputs to make sure they stay within a feasible subset, but that’s like giving the monkeys a dictionary so that they use only actual words – it’s still a long way from there to an encyclopedia.
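If you want to picture the monkeys at work, here’s a toy constrained-random loop (with the constraints and corner cases made up for the example): the inputs are random but restricted to a feasible subset, and you simply keep track of which functional corners the traffic happens to wander into.

import random

random.seed(0)
hit = set()

for _ in range(10_000):
    addr = random.randrange(0, 2**16)
    length = random.choice([1, 4, 8, 16])
    # Constraint: keep the burst inside a 4 KB page (a feasible-input rule).
    if (addr % 4096) + length > 4096:
        continue
    # Track which "functional corners" the random traffic has exercised.
    if addr % 4096 == 0:
        hit.add("page-aligned start")
    if (addr + length) % 4096 == 0:
        hit.add("burst ends exactly on a page boundary")
    if length == 16 and addr % 2 == 1:
        hit.add("odd-address max-length burst")

print(hit)  # plenty of traffic, yet some corners may still be unhit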

In practice, this technique is what gets you to that 80-percent-or-so coverage level. Some nodes and functions are just hard to reach. Some are impossible to reach. For the former, you can keep spraying or even go in and manually craft tests, but that generally proves too time-consuming, and so, at some point, you decide that good enough is good enough and move on.

NuSym is addressing this by going back to the old concept of actually looking at the design to figure out how to craft a test for a particular node. Hard-to-hit faults are typically difficult because the conditions required to reach them are tortuous and unlikely to be hit randomly. So instead, NuSym’s technology traces a path backwards to calculate deterministically how the fault can be activated. Not surprisingly, they refer to this as “path tracing.”

The tool also “learns” about the design as it does this, so by applying a few passes, it can build up a minimal set of tests that can, by design, hit reachable faults. And here’s the other bit: for those faults that aren’t reachable, it will show you why you can’t get there. With that information, you can either change the design to make them reachable, or you can decide conclusively that they can never be reached in a real scenario, and therefore you don’t have to worry about them anymore. Either approach will boost your confidence in the readiness of your design for production.
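NuSym hasn’t disclosed the details of its engine, but the contrast with random stimulus can be illustrated with an off-the-shelf constraint solver: given the condition guarding a hard-to-hit branch, a solver either produces an activating input directly or proves that none exists. The conditions below are invented, and the sketch assumes the z3-solver Python package.

from z3 import BitVec, Solver, sat

x = BitVec("x", 32)
y = BitVec("y", 32)

# Condition guarding a hard-to-hit branch deep in the design:
# random inputs almost never satisfy it, but a solver finds it directly.
hard_branch = Solver()
hard_branch.add((x ^ 0xDEADBEEF) == 0x12345678, y == x + 7)
if hard_branch.check() == sat:
    print("activating input:", hard_branch.model())

# A branch whose guard is contradictory: the solver proves it unreachable,
# which is exactly the "you never have to worry about it" answer.
dead_branch = Solver()
dead_branch.add(x > 10, x < 5)
print("dead branch reachable?", dead_branch.check() == sat)  # False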

NuSym isn’t touting its tool as competing with existing methodologies; they’re positioning it as an enhancement that can dramatically reduce the time it takes to close the gap once you’ve spent enough time on the constrained random tests.

By comparison to some of the grander verification innovations of the last decade, these might seem more modest. And yet it would appear that there are significant time savings to be gleaned. And anything that reduces the time to verify while actually improving the quality of the design has to be a good thing.

Links:

Denali PureSpec

NuSym

Zocalo
