
More Efficient Vectors

In the wake of the UCIS announcement at DAC (which we’ll cover separately later), I sat down with some of Mentor’s functional verification folks to get an update. Coverage was one of the items on their agenda as part of addressing metric-driven verification.

They talk in terms of changing the engineering mindset when it comes to evaluating verification tools. Right now engineers tend to think in terms of “cycles/second”: how fast can you blaze through these vectors? Mentor is trying to change that thought process to “coverage/cycle”: it’s OK to take longer per cycle (OK, actually, they didn’t explicitly say that – probably dodgy territory from a marketing standpoint – and I don’t know whether their solution is any slower on a per-cycle basis – but I’m inferring here…) as long as you get to coverage faster. In other words, maybe one tool can zip through a bazillion vectors in three hours, but it’s better to have a tool that needs only half a bazillion vectors and completes in two hours (slower on a per-vector basis, but faster to overall completion).
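
To put some (entirely made-up) numbers on that trade-off, here’s a toy calculation in Python. Nothing in it comes from Mentor; it just illustrates how a tool can lose on vectors/second and still win on time to coverage:

```python
# Toy numbers only (nothing here comes from Mentor) to illustrate
# "cycles/second" vs. time-to-coverage.

HOURS = 3600  # seconds per hour

# Tool A: churns through more vectors per second...
tool_a_vectors = 1_000_000_000   # "a bazillion"
tool_a_runtime = 3 * HOURS       # ...but needs 3 hours to close coverage

# Tool B: slower per vector...
tool_b_vectors = 500_000_000     # "half a bazillion"
tool_b_runtime = 2 * HOURS       # ...but closes the same coverage in 2 hours

for name, vectors, runtime in [("A", tool_a_vectors, tool_a_runtime),
                               ("B", tool_b_vectors, tool_b_runtime)]:
    print(f"Tool {name}: {vectors / runtime:,.0f} vectors/sec, "
          f"coverage closure in {runtime / HOURS:.0f} h")

# Tool A wins on raw throughput (~92,593 vs. ~69,444 vectors/sec),
# but Tool B reaches the same coverage an hour sooner.
```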

Part of this is handled by their InFact “intelligent testbench.” As I see it, they’re trying to solve two problems with it. First, there are hard-to-reach states in any design; the tool builds a graph of the design and uses it to identify trajectories. From that, it should be able to reach any reachable state with the fewest vectors possible. Which is fine when you’re testing just that one state.
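
As a rough sketch of that first idea (the general idea only, not Mentor’s implementation), a breadth-first search over a hypothetical state graph finds the shortest stimulus sequence that reaches a hard-to-reach state:

```python
from collections import deque

def shortest_stimulus(graph, start, target):
    """Breadth-first search over a state graph.

    `graph` maps each state to {input_vector: next_state}.  Returns the
    shortest sequence of input vectors that drives the design from
    `start` to `target`, or None if `target` is unreachable.
    """
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, path = queue.popleft()
        if state == target:
            return path
        for vector, nxt in graph.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [vector]))
    return None  # target state is unreachable

# Tiny made-up FSM: reach "err" from "idle" with as few vectors as possible.
fsm = {
    "idle": {"req": "busy"},
    "busy": {"abort": "err", "done": "idle"},
}
print(shortest_stimulus(fsm, "idle", "err"))  # ['req', 'abort']
```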

But the second thing they do appears to be their own variation on the “traveling salesman” problem. How do you traverse the graph to get to all the nodes without repeating any path? (The canonical traveling salesman problem is about not repeating any node and ending back where you started.) The idea is to get full coverage with as few vectors as possible. This gets specifically to the “coverage/cycle” metric.
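
And a rough sketch of that second idea, again hypothetical rather than anything from InFact: a greedy heuristic that repeatedly walks to the nearest not-yet-covered state, so the whole graph gets touched with one short stimulus sequence:

```python
from collections import deque

def greedy_coverage_walk(graph, start):
    """Cover every reachable state with one stimulus sequence by
    repeatedly walking to the nearest not-yet-covered state.
    A toy heuristic only; real intelligent-testbench tools use far
    more sophisticated graph algorithms."""

    def nearest_leg(src, goals):
        # Shortest leg from `src` to any state in `goals`, as a list
        # of (input_vector, next_state) steps; None if all unreachable.
        queue, seen = deque([(src, [])]), {src}
        while queue:
            state, steps = queue.popleft()
            if state in goals:
                return steps
            for vec, nxt in graph.get(state, {}).items():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [(vec, nxt)]))
        return None

    all_states = set(graph) | {s for edges in graph.values() for s in edges.values()}
    covered, stimulus, here = {start}, [], start
    while covered != all_states:
        leg = nearest_leg(here, all_states - covered)
        if leg is None:          # whatever remains is unreachable
            break
        for vec, state in leg:
            stimulus.append(vec)
            covered.add(state)   # states passed through along the way count too
        here = leg[-1][1]
    return stimulus

# Same toy FSM as above, plus a reset edge out of the error state.
fsm = {
    "idle": {"req": "busy"},
    "busy": {"abort": "err", "done": "idle"},
    "err":  {"reset": "idle"},
}
print(greedy_coverage_walk(fsm, "idle"))  # ['req', 'abort'] covers idle, busy, err
```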

Which reinforces the old truth that simply having and rewarding metrics doesn’t necessarily help things. It’s too easy to have the wrong metrics – which will be attained, and for which rewards will be paid – without actually improving anything. Because they’re the wrong metrics.

Perhaps MDV should be modified to UMDV: Useful-Metric-Driven Verification. Of course, then we’ll get to watch as companies battle over which metrics are useful. But that could make for entertaining viewing too…

