
More Efficient Vectors

In the wake of the UCIS announcement at DAC (which we’ll cover separately later), I sat down with some of Mentor’s functional verification folks to get an update. Coverage was one of the items on their agenda as part of addressing metric-driven verification.

They talk in terms of changing the engineering mindset when it comes to evaluating verification tools. Right now engineers tend to think in terms of “cycles/second”: how fast can you blaze through these vectors? Mentor is trying to change that thought process to “coverage/cycle”: it’s OK to take longer per cycle (OK, actually, they didn’t explicitly say that – probably a bit dodgy territory from a marketing standpoint – and I don’t know whether their solution is any slower on a per-cycle basis – but I’m inferring here…) as long as you get coverage faster. In other words, maybe one tool can zip through a bazillion vectors in three hours, but it’s better to have a tool that only needs a half-bazillion vectors and completes in two hours (slower on a per-vector basis, but faster overall completion).
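To put some numbers on that (entirely made up for illustration; none of these figures came from Mentor), here's a quick back-of-the-envelope comparison showing how a tool that's slower per vector can still close coverage sooner:

```python
# Hypothetical numbers, purely for illustration (not Mentor benchmark data).
tools = {
    "Tool A": {"vectors": 1_000_000_000, "hours": 3.0},  # brute force: more vectors, faster per vector
    "Tool B": {"vectors":   500_000_000, "hours": 2.0},  # smarter: half the vectors, slower per vector
}

for name, t in tools.items():
    per_sec = t["vectors"] / (t["hours"] * 3600)
    print(f"{name}: {per_sec:,.0f} vectors/sec, coverage closed in {t['hours']:.0f} hours")

# Tool A wins on raw throughput (~93K vs. ~69K vectors/sec),
# but Tool B reaches the same coverage goal a full hour earlier.
```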

Part of this is handled by their InFact “intelligent testbench.” They try to solve two problems with it, as I see it. First, there are hard-to-reach states in any design; the tool builds a graph of the design for use in identifying trajectories. From that, they should be able to reach any reachable state with the fewest vectors possible. Which is fine when testing just that one state.
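To illustrate that first idea (this is my own toy sketch, not a description of InFact's actual algorithm), finding the shortest stimulus sequence to a hard-to-reach state amounts to a shortest-path search over the design's state graph:

```python
from collections import deque

def shortest_trajectory(graph, start, target):
    """Breadth-first search over a state graph.

    graph maps each state to a list of (input_vector, next_state) edges.
    Returns the shortest list of input vectors driving the design from
    `start` to `target`, or None if `target` is unreachable.
    """
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, path = queue.popleft()
        if state == target:
            return path
        for vector, nxt in graph.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [vector]))
    return None  # target state is unreachable

# Toy state machine: ERR is the hard-to-reach state.
fsm = {
    "IDLE": [("cfg_wr", "CFG")],
    "CFG":  [("start", "RUN"), ("reset", "IDLE")],
    "RUN":  [("bad_pkt", "ERR"), ("stop", "IDLE")],
}
print(shortest_trajectory(fsm, "IDLE", "ERR"))  # ['cfg_wr', 'start', 'bad_pkt']
```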

But the second thing they do appears to be their own variation on the “traveling salesman” problem: how do you traverse the graph to hit all the nodes without repeating any path? (The canonical traveling salesman problem is about visiting every node exactly once and ending back where you started.) The idea is to get full coverage with as few vectors as possible. This gets specifically to the “coverage/cycle” metric.
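For a feel of that second problem (again a naive sketch of mine, not a claim about how InFact actually does it), one simple greedy heuristic reuses the shortest_trajectory helper and toy fsm above, repeatedly walking to the nearest uncovered state until everything reachable has been visited:

```python
def greedy_cover(graph, start):
    """Greedy node coverage: repeatedly walk to the nearest uncovered state.

    Returns one input sequence that visits every reachable state.
    This is a naive heuristic, not an optimal tour.
    """
    all_states = set(graph) | {n for edges in graph.values() for _, n in edges}
    covered, sequence, current = {start}, [], start
    while covered != all_states:
        best = None
        for target in all_states - covered:
            path = shortest_trajectory(graph, current, target)
            if path is not None and (best is None or len(path) < len(best[1])):
                best = (target, path)
        if best is None:
            break  # remaining states are unreachable from here
        current, path = best
        sequence.extend(path)
        # A smarter version would also mark intermediate states as covered;
        # this naive pass only credits the endpoint of each walk.
        covered.add(current)
    return sequence

print(greedy_cover(fsm, "IDLE"))  # ['cfg_wr', 'start', 'bad_pkt']
```

The point of the toy version is only to show why fewer, better-chosen vectors can cover the same state space as a much longer random run, which is exactly the coverage/cycle argument.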

Which reinforces the old truth that simply having and rewarding metrics doesn’t necessarily help things. It’s too easy to have the wrong metrics – which will be attained and for which rewards will be paid – and not improve life. Because they’re the wrong metrics.

Perhaps MDV should be modified to UMDV: Useful-Metric-Driven Verification. Of course, then we’ll get to watch as companies battle over which metrics are useful. But that could make for entertaining viewing too…
