
Are We There Yet?

New Ways to Reduce Verification Time

It’s pretty much a given that verifying an IC takes longer than actually designing it. In their product pitches, companies seem to have stopped spending as much time as they used to arguing that verification is a) important and b) hard. We’ve all come to accept that.

Much of the focus on making verification easier has been on large-scale infrastructural elements like transaction-level modeling (TLM). These are major changes to how things are done, or they make possible things that weren’t possible before. But recently, there have been some announcements that, by those standards, may seem more modest and yet, if they meet the expectations they’ve set, should still have a significant impact on the time it takes to check out a chip. Don’t think of them as new methodologies; they’re productivity enhancers that make it easier to use the new methodologies.

So let’s take a look at three of these ideas, each of which addresses a different bottleneck. We’ll look at them in no particular order.

Know the protocol

The existence of third-party IP is predicated on the benefits of not having to design certain parts of a chip yourself. The idea is to save a lot of time by using something that’s already built for you. When it comes to some of the more complex IP for things like protocols or other standards, one of the major benefits is that you no longer have to learn the spec in depth. And, seriously, if you had to design a PCI Express block from scratch, how much time do you think would be dedicated to studying the rather impressive document that tells you what you have to do?

So by using IP, you get to skip that step along with the design work itself. Of course, it’s not like you plug in the IP and forget about it. You have to verify it, both as a QA check on your IP provider and as part of verifying the entire system. And you can obtain verification IP that lets you perform those tests.

And exactly how do you know which tests you need to perform? Why, you study the spec, of course! You know, that step you got to skip because you were buying IP instead of building it yourself.

Yeah… kind of defeats the purpose.

Denali has addressed this with their PureSpec product, which essentially encapsulates the IP spec in a tool. They have gone through the spec and identified all of the “musts,” “shalls,” and “mays” and organized them so that you don’t have to. Instead, you can use their GUI to browse the various tests, selecting which to perform and which to skip on any given run. They tie each test to the spec itself, so, for example, if a test fails, you can click through to the relevant text of the standard to see what was supposed to happen.
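
To make that spec-to-test tie concrete, here’s a sketch of the idea only, with hypothetical signals and placeholder clause numbers rather than PureSpec’s actual interface or output. The point is the traceability: every check names the clause it implements, so a failure points you at one paragraph of the standard instead of the whole book.

```systemverilog
// Illustrative sketch only -- hypothetical signals and placeholder clause
// numbers, not PureSpec's actual form. Each check carries a reference to
// the spec clause it implements.
module spec_checks (
  input logic       clk,
  input logic       tlp_valid,
  input logic [9:0] tlp_length
);

  // "Clause X.Y: a valid TLP shall carry a nonzero length field"
  CLAUSE_X_Y_nonzero_length: assert property (
    @(posedge clk) tlp_valid |-> (tlp_length != 0)
  ) else $error("Clause X.Y violated: zero-length TLP");

endmodule
```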

Now, obviously, if you’re having a really bad day with a really bad piece of IP that keeps sending you back to the spec to learn what’s going wrong, you may end up becoming all too familiar with the functionality. But on an average day, you should have to focus in only on problem areas rather than the entire spec. And on a good day, everything will just work. (Yeah… and then you wake up…)

Becoming assertive

Another innovation in high-level verification methodology is the use of assertions. You can write assertions into your tests and, better yet, link to libraries of checkers to avoid having to rewrite common checks.

But according to Zocalo, this hasn’t taken off, largely because it’s still too cumbersome and time consuming to do so. It’s that kind of thing where you can’t quite justify to your boss why you didn’t do it, but, if you were honest, you’d simply say that it’s too much of a pain in the butt.

Zocalo actually divides this world into two camps: the kinds of assertions that designers put in, which tend to be relatively simple and check a block’s obvious functionality, and the ones verification engineers put in, which are much more complex and are intended for comprehensive checkout.
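
For a sense of what’s involved, here’s a minimal SystemVerilog sketch of a designer-level assertion, assuming a hypothetical req/gnt handshake. Each one is simple enough on its own; the friction comes less from any single assertion than from writing, configuring, and hooking up many of them, especially once library checkers enter the picture.

```systemverilog
// A minimal designer-level assertion: every request must be granted
// within four cycles. Hypothetical req/gnt handshake, shown to give a
// feel for the style rather than for any particular checker library.
module arb_assertions (input logic clk, rst_n, req, gnt);

  property req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  a_req_gets_gnt: assert property (req_gets_gnt)
    else $error("req was not granted within 4 cycles");

endmodule
```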

So if designer-level assertions are too messy to bother with, you can imagine that the verification-engineer-level ones would be even worse. Zocalo is trying to address both, although, for the time being, they’ve made an announcement on the designer-level problem with a tool called Zazz, with the verification portion to follow sometime before the end of the year.

The idea here is to connect designers to those libraries in a more straightforward fashion. They’ve done this with a GUI that lets you browse the libraries and then pick and configure checkers to implement the specific checks you want. At this level, you’re just seeing blocks and tests and clicking checkboxes. The tool then does the underlying dirty work of writing whatever cryptic text is required to make it all work: the pain-in-the-butt stuff.

When implementing the checks, you can choose either to keep a check as a library reference (that is, bind it) or to instantiate it directly in the design. It appears that if you do the latter, you can’t edit the check later; if you bind it, you can go back in and reconfigure things if you want to make changes.
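
In SystemVerilog terms, the two options map roughly onto the sketch below. The module and checker names are hypothetical, and the text Zazz actually generates will differ; this is just the flavor of the “cryptic text” the tool spares you from writing.

```systemverilog
// Two ways a library checker can end up attached to a design block.
// Module and checker names are hypothetical.

// 1) Bound from outside: the RTL source stays untouched, so the checker
//    can be reconfigured or dropped later without editing the design.
bind fifo_ctrl fifo_checker #(.DEPTH(16)) u_fifo_chk (
  .clk   (clk),
  .push  (wr_en),
  .pop   (rd_en),
  .full  (full),
  .empty (empty)
);

// 2) Instantiated directly inside the design module, like any other
//    submodule: simpler, but now it lives in the RTL itself.
fifo_checker #(.DEPTH(16)) u_fifo_chk (
  .clk(clk), .push(wr_en), .pop(rd_en), .full(full), .empty(empty)
);
```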

Their expectation is that, with this simple addition, the barrier to using assertions will drop significantly enough to get designers to use them on a regular basis.

Have they gotten to ZZ Top yet?

Our third entry deals with yet another verification challenge: coverage. How do you know when your design is really ready for prime time? At what point have you done enough verification? That’s actually not a simple question. Ideally, you want to have covered all the functionality of the entire design with tests in order to have 100% confidence.

But what does “coverage” mean? It’s convenient to use a “structural” measure like having hit every line of the design. But in theory, you want to have pushed every possible value through every possible point of functionality, in all combinations, legal and illegal (although, as a reward for good behavior, you might be allowed to get away with only the values that are actually feasible).
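
In SystemVerilog terms, the structural measure is the line, branch, or toggle coverage a simulator reports more or less for free, while the push-every-value ambition is what functional covergroups try to approximate. A minimal sketch, with hypothetical packet fields:

```systemverilog
// A minimal functional-coverage sketch with hypothetical packet fields.
// Line coverage is satisfied the first time this code executes; the
// covergroup keeps asking for every interesting value and combination.
module coverage_sketch (
  input logic       clk,
  input logic [7:0] pkt_len,
  input logic [1:0] pkt_kind
);

  covergroup pkt_cov @(posedge clk);
    len_cp : coverpoint pkt_len {
      bins zero_len  = {0};
      bins short_pkt = {[1:15]};
      bins long_pkt  = {[16:255]};
    }
    kind_cp    : coverpoint pkt_kind;      // hypothetical packet-type field
    len_x_kind : cross len_cp, kind_cp;    // every combination of the two
  endgroup

  pkt_cov cov = new();  // instantiate the covergroup so sampling happens

endmodule
```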

It’s that kind of coverage that’s practically impossible to push all the way to 100%. You see numbers in the 80-to-90-percent range as frequent asymptotes; beyond that, you have to ship the product or risk missing your market window entirely.

So why can’t you get to 100%? Back in the day, automatic test pattern generators looked at designs and created tests specifically intended to exercise individual nodes. The main focus was simply to ensure that each node wasn’t stuck at a high or low value. But as designs have become monumentally more complex, the computation required to calculate those patterns has long since made that approach look quaint.

Instead, random shotgun blasts are sprayed at the design under the assumption that if you do that enough, you’ll exercise all the functionality of the design. It’s the equivalent of hiring monkeys with typewriters (OK, keyboards) to produce the Encyclopaedia Britannica. You can constrain the inputs to make sure they stay within a feasible subset, but that’s like giving the monkeys a dictionary so that they use only actual words – it’s still a long way from there to an encyclopedia.
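
A constrained-random sketch in SystemVerilog, with hypothetical packet fields, shows what handing the monkeys a dictionary amounts to in practice:

```systemverilog
// Constrained-random stimulus in miniature: randomize freely, but only
// within the legal subset. Hypothetical packet fields, for illustration.
module tb;

  class packet;
    rand bit [7:0] addr;
    rand bit [7:0] len;
    rand bit       is_error;

    constraint legal_c {
      addr inside {[8'h10 : 8'hEF]};     // stay inside the mapped range
      len > 0;                           // zero-length packets aren't legal
      is_error dist {0 := 95, 1 := 5};   // mostly clean traffic, some errors
    }
  endclass

  initial begin
    packet p = new();
    repeat (10_000) begin
      void'(p.randomize());
      // drive p into the design under test here...
    end
  end

endmodule
```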

In practice, this technique is what gets you to 80 percent or so of coverage. Some nodes and functions are just hard to reach. Some are impossible to reach. For the former, you can keep spraying or even go in and manually craft tests, but that generally proves too time-consuming, and so, at some point, you decide that good enough is good enough and move on.

NuSym is addressing this by going back to the old concept of actually looking at the design to figure out how to craft a test for a particular node. Hard-to-hit faults are typically difficult because the conditions causing them to be reached are tortuous and unlikely to be achieved randomly. So instead, NuSym’s technology traces a path backwards to calculate deterministically how the fault can be activated. Not surprisingly, they refer to this as “path tracing.”
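
A contrived example shows why. In the hypothetical fragment below, purely random 32-bit stimulus matches the magic value with odds of roughly one in four billion per try, yet working backwards from the branch condition makes the single activating value obvious:

```systemverilog
// A contrived hard-to-reach branch (hypothetical design). Random stimulus
// hits the comparison with probability ~1 in 2^32; tracing backwards from
// the branch, the one input value that reaches it falls out directly.
module lock_fsm (
  input  logic        clk,
  input  logic [31:0] data_in,
  output logic        unlocked
);
  always_ff @(posedge clk) begin
    if (data_in == 32'hCAFE_F00D)   // the hard-to-hit condition
      unlocked <= 1'b1;
  end
endmodule
```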

The tool also “learns” about the design as it does this, so by applying a few passes, it can build up a minimal set of tests that can, by design, hit reachable faults. And here’s the other bit: for those faults that aren’t reachable, it will show you why you can’t get there. With that information, you can either change the design to make them reachable, or you can decide conclusively that they can never be reached in a real scenario, and therefore you don’t have to worry about them anymore. Either approach will boost your confidence in the readiness of your design for production.

NuSym isn’t touting its tool as competing with existing methodologies; they’re positioning it as an enhancement that can dramatically reduce the time it takes to close the gap once you’ve spent enough time on the constrained random tests.

Compared with some of the grander verification innovations of the last decade, these might seem modest. And yet it would appear that there are significant time savings to be gleaned. And anything that reduces verification time while actually improving the quality of the design has to be a good thing.

Links:

Denali PureSpec

NuSym

Zocalo
