
Passing the Test

Vennsa Tries to Figure Out Who Screwed Up

Several years ago, while renting a vehicle for an event I was going to attend, the rental guy pointed out that my driver’s license had expired a couple of months earlier, so he couldn’t rent me the vehicle. My wife at the time bailed me out, but I decided to postpone my departure by a day to avoid the risk of getting pulled over with no license to show. Which meant an emergency trip to the DMV.

I found that I could get licensed quickly, but I had to take the written test to do so – something I hadn’t taken (or studied for) since I was a teenager. So, with no preparation, I went off and answered the questions, knowing that these tests have a history of being flaky, and nervous that my ability to drive legally hung in the balance.

I don’t remember what the numbers are, but you aren’t allowed to miss many, and I missed one too many. When I asked about the questions, there was one in particular that stood out:

“In which of the following places is it not OK to park?”

I don’t remember two of the answers (they were obviously not it), but the other two were:

“In a parking space with striped lines in it”

“In a bike lane”

Now, I couldn’t figure out what a parking spot with striped lines in it was – I couldn’t remember ever having seen one, and, since I hadn’t read the manual in decades, that choice was lost on me. But I was pretty damn sure you can’t park in a bike lane, which was good, since that meant that it was the answer and I didn’t have to worry about the other one.

Wrong.

It turns out that the parking space with the stripes in it is the area next to a handicapped parking space that’s marked off to give a wheelchair room to maneuver. To me, that wasn’t “a parking space with stripes”; it was a non-parking area, striped off precisely so it wouldn’t be confused with a parking space. Well, that’s not how the DMV saw it.

Fortunately, as I talked to the grader, I said, “But you can’t park in a bike lane, can you?” And she emphatically agreed, “Oh, no.” And so, bless her heart, she gave me that one. And I drove away. Legally.

But it raises the point that tests are complicated things. When something fails, it’s not always obvious what the problem is. The only thing you really know is that there’s a problem. In this case, there were three possible sources of the problem:

  • The question could have been faulty or ambiguous
  • The answer key could have been incorrect
  • I may have simply gotten the wrong answer

Chip design verification works much the same way. You pose questions through the testbench by stimulating the design to see what answer it gives. Assertions act as the answer key, flagging when an answer is wrong. And, presumably, if there’s a mistake in the design, it will be identified.
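
To make the analogy concrete, here’s a toy sketch in Python – real verification environments would use SystemVerilog or the like, and everything here, including the bug, is hypothetical. The testbench poses the questions, the design answers them, and the assertion is the answer key.

    # Toy analogy (purely illustrative; real flows use SystemVerilog/UVM):
    # testbench = questions, design = answers, assertion = answer key.

    def design(a, b):
        """Device under test: meant to add two numbers, but has a bug."""
        return a + b if a != 3 else a + b + 1   # hypothetical injected bug

    def assertion(a, b, result):
        """Answer key: states what a correct result must look like."""
        return result == a + b

    def testbench():
        """Stimulus: pose the questions, check each answer against the key."""
        for a in range(5):
            for b in range(5):
                if not assertion(a, b, design(a, b)):
                    print(f"assertion fired: design({a}, {b}) = {design(a, b)}")

    testbench()   # fires for a == 3

Note that when the check fails, any of the three pieces could be the culprit: the bug planted above happens to be in the design, but nothing in the failure message tells you that.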

So, just like the DMV’s test, when an answer is wrong, it could actually be an indication of one of three things:

  • A testbench problem
  • An assertion problem
  • A problem with the design

Debugging each instance of an assertion firing can get tiresome. And it can be downright mind-boggling if complex assertions and/or logic are involved. So to address this challenge, a new company named Vennsa has launched a tool called OnPoint that is supposed to do a lot of that debugging work for you.

Theoretically, automating this kind of debugging is easy. You take the cone of influence of the failing assertion and perturb each contributor to see what happens. You try all the combinations (and permutations, if that matters) and check the results. Any perturbation that reproduces the observed failure marks its contributor as a candidate for the root cause of the problem, with the understanding that the problem could be in the testbench or the assertion as well as the design.
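
Here’s a minimal brute-force sketch of that idea, in the same spirit as the toy above – my illustration only, not Vennsa’s algorithm, with an invented two-signal netlist:

    # Brute-force root-cause candidates (illustration only, not Vennsa's
    # algorithm): flip each signal in the failing output's cone of
    # influence and keep any flip that reproduces the observed failure.

    def simulate(a, b, c, force=None):
        """Golden netlist for out = (a AND b) OR c; `force` overrides signals."""
        force = force or {}
        sig = {}
        sig["n1"] = force.get("n1", a and b)
        sig["out"] = force.get("out", sig["n1"] or c)
        return sig

    # A failing test: the assertion expected False on this stimulus, but
    # the real (buggy) design produced True.
    stim = (False, True, False)
    observed_failure = True

    golden = simulate(*stim)                  # what a correct design does
    assert golden["out"] != observed_failure  # so something really is wrong

    suspects = []
    for name in golden:                       # perturb each contributor
        flipped = simulate(*stim, force={name: not golden[name]})
        if flipped["out"] == observed_failure:
            suspects.append(name)

    print("root-cause suspects:", suspects)   # ['n1', 'out']

Even in this two-signal toy, more than one suspect can explain the same failure – which is exactly why the solution space explodes on a real design.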

And this works fine if you’ve got all the time in the world to explore this exploding solution space. Which most of us don’t. What Vennsa has brought to the party is apparently a clever approach to keeping that solution space tractable. The result is typically a dozen or so root cause suspects per issue.

These suspects are then ranked. Vennsa has a number of considerations that go into the rankings (and they can’t resist the temptation to compare themselves to Google – something I’m sure the VCs like, since you have to say you’re “the Google of …” or “the eBay of …” or something like that to get their attention). For example, in an intuitive reversal of Occam’s Razor, they rank more complicated suspects higher than simple ones. In other words, if the area in question is complex, it’s more likely that there’s something wrong there.
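
As a rough illustration of the reversed-razor idea – my own sketch, since Vennsa hasn’t published its actual scoring criteria – you could imagine weighting each suspect by how much logic surrounds it:

    # Hypothetical ranking sketch (Vennsa's real criteria aren't public):
    # score each suspect by the complexity of its surrounding logic, so
    # the complicated areas float to the top of the list.

    def rank_suspects(suspects, complexity):
        """Sort suspects with the most complex surrounding logic first."""
        return sorted(suspects, key=lambda s: complexity[s], reverse=True)

    # Fan-in cone size as a crude stand-in for "complexity" (made-up names)
    complexity = {"alu_ctrl": 42, "fifo_ptr": 17, "mux_sel": 3}
    print(rank_suspects(["mux_sel", "alu_ctrl", "fifo_ptr"], complexity))
    # ['alu_ctrl', 'fifo_ptr', 'mux_sel']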

Then, along with each suspect, comes a suggested fix, proffered via waveform. These fixes have already been vetted to guarantee that doing any of them won’t cause any other assertion along that simulation trajectory to fail. This goes part way towards avoiding a whack-a-mole problem where one fix causes another problem. But it doesn’t eliminate the need to re-verify the design as a whole after the fix is in place to ensure that it didn’t screw something up further afield.
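
Conceptually, the vetting step might look like this – a sketch under my own assumptions about the data structures, not Vennsa’s code:

    # Vetting sketch (my assumptions, not Vennsa's code): keep only the
    # fixes under which *every* assertion along the failing trace passes,
    # not just the one that originally fired.

    def vet_fixes(candidate_fixes, replay, assertions):
        """replay(fix) re-simulates the trace with the fix applied and
        returns the resulting signal values; assertions are predicates."""
        return [fix for fix in candidate_fixes
                if all(check(replay(fix)) for check in assertions)]

    # Hypothetical usage with a trivial one-cycle "trace":
    fixes = [{"ack": True}, {"ack": False}]
    replay = lambda fix: {"req": True, **fix}
    assertions = [lambda t: t["ack"] == t["req"]]   # every req gets an ack
    print(vet_fixes(fixes, replay, assertions))     # [{'ack': True}]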

The fixes suggested could involve the testbench, the assertion, or the design. It isn’t simply assumed that, just because the assertion fired, there is indeed a problem with the design.

They accomplish all of this with a combination of technologies, including, by their description, formal techniques (which make up about 80% of what goes on), along with SAT solvers, binary decision diagrams, and other computational arcana. They can work with a variety of simulators and formal tools in the verification path. Those are the tools testing the design; OnPoint is the tool testing the errors.
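
For a taste of the formal flavor, here’s the classic SAT-based debugging formulation applied to the earlier toy circuit, this time with its bug baked in – again my own construction, with a brute-force loop standing in for a real solver. Attach a “repair” switch to each suspect location, and ask whether turning on a single switch lets the failing trace satisfy the assertion.

    # SAT-style debugging sketch (my construction; brute force stands in
    # for a real SAT solver). A location is a suspect if making it a free
    # variable lets the assertion pass on the failing stimulus.
    from itertools import product

    def circuit(a, b, c, repair, free):
        """out should be (a AND b) OR c; n1 and out are repairable."""
        n1 = free["n1"] if repair == "n1" else (a or b)   # BUG: should be AND
        out = free["out"] if repair == "out" else (n1 or c)
        return out

    def assertion(a, b, c, out):
        return out == ((a and b) or c)

    stim = (False, True, False)
    suspects = set()
    for repair in ("n1", "out"):                          # one switch at a time
        for v1, v2 in product([False, True], repeat=2):   # the solver's freedom
            free = {"n1": v1, "out": v2}
            if assertion(*stim, circuit(*stim, repair, free)):
                suspects.add(repair)

    print(sorted(suspects))   # ['n1', 'out']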

In a perfect world, tests are always clear and unambiguous, the answer keys are always correct, and the test-taker is the only unknown. Actually, in a truly perfect world, the test-taker also has perfect knowledge and would never fail a test. But we don’t live in that world (well, Chuck Norris does, but none of us do). Given that unfortunate reality, Vennsa is hoping to help manage the challenge of figuring out what went wrong when something goes wrong.

Now…  whether they’d be able to bring order to the DMV, well, that’s quite a different question…

 

More info:  Vennsa

 
