
Passing the Test

Vennsa Tries to Figure Out Who Screwed Up

Several years ago, while renting a vehicle for an event I was going to attend, the rental guy pointed out that my driver’s license had expired a couple months prior. So he couldn’t rent me the vehicle. My wife at the time bailed me out, but I decided to postpone my departure by a day to avoid the risk of getting pulled over with no license to show. Which meant an emergency trip to the DMV.

I found that I could get licensed quickly, but I had to take the written test in order to do so – something I hadn’t taken (or studied for) since I was a teenager. So, with no preparation, I went off and answered the questions, knowing that these tests have a history of being flaky, and nervous that my ability to drive legally hung in the balance.

I don’t remember what the numbers are, but you aren’t allowed to miss many. And I missed one too many. When I asked about the questions, there was one in particular that stood out:

“In which of the following places is it not OK to park?”

I don’t remember two of the answers (they were obviously not it), but the other two were:

“In a parking space with striped lines in it”

“In a bike lane”

Now, I couldn’t figure out what a parking spot with striped lines in it was – I couldn’t remember ever having seen one, and, since I hadn’t read the manual in decades, that choice was lost on me. But I was pretty damn sure you can’t park in a bike lane, which was good, since that meant that it was the answer and I didn’t have to worry about the other one.

Wrong.

It turns out that the parking space with the stripes in it is the area next to a handicapped parking area that’s marked off for the wheelchair to have room to maneuver in. To me, that was “not a parking space,” but was a non-parking area striped off so as not to be confused with a parking space. It wasn’t “a parking space with stripes.” Well, that’s not how the DMV saw it.

Fortunately, as I talked to the grader, I said, “But you can’t park in a bike lane, can you?” And she emphatically agreed, “Oh, no.” And so, bless her heart, she gave me that one. And I drove away. Legally.

But it raises the point that tests are complicated things. When something fails, it’s not always obvious what the problem is. The only thing you really know is that there’s a problem. In this case, there were three possible sources of the problem:

  • The question could have been faulty or ambiguous
  • The answer key could have been incorrect
  • I may have simply gotten the wrong answer

You can test your chip designs as well during your verification cycle. You pose questions through the testbench by stimulating the design to see what answer it gives. You use assertions to act as the answer key and flag when the answer is wrong. And, presumably, if there’s a mistake in the design, it will be identified.

So, just like the DMV’s test, when an answer is wrong, it could actually be an indication of one of three things:

  • A testbench problem
  • An assertion problem
  • A problem with the design

Debugging each instance of an assertion firing can get tiresome. And it can be downright mind-boggling if complex assertions and/or logic are involved. So to address this challenge, a new company named Vennsa has launched a tool called OnPoint that is supposed to do a lot of that debugging work for you.

Theoretically, automating this kind of debugging is easy. You take the cone of influence and perturb each contributor to it to see what happens. And you perturb all combinations (and permutations, if that matters) and check the results. And any that give the observed failure become candidates for the root cause of the problem, understanding that the problem could be in the testbench or assertion as well as the design.

And this works fine if you’ve got all the time in the world to explore this exploding solution space. Which most of us don’t. What Vennsa has brought to the party is apparently a clever approach to keeping that solution space tractable. The result is typically a dozen or so root cause suspects per issue.

These suspects are then ranked. Vennsa has a number of considerations that go into the rankings (and they can’t resist the temptation to compare themselves to Google – something I’m sure the VCs like, since you have to say you’re “the Google of …” or “the eBay of …” or something like that to get their attention). For example, in an intuitive reversal of Occam’s Razor, they rank more complicated suspects higher than simple ones. In other words, if the area in question is complex, it’s more likely that there’s something wrong there.

Then, along with each suspect, comes a suggested fix, proffered via waveform. These fixes have already been vetted to guarantee that doing any of them won’t cause any other assertion along that simulation trajectory to fail. This goes part way towards avoiding a whack-a-mole problem where one fix causes another problem. But it doesn’t eliminate the need to re-verify the design as a whole after the fix is in place to ensure that it didn’t screw something up further afield.

The fixes suggested could involve the testbench, the assertion, or the design. It isn’t simply assumed that, just because the assertion fired, there is indeed a problem with the design.

They accomplish all of this with a combination of technologies, including, by their description, formal (which makes up about 80% of what goes on), along with SAT solvers, binary decision diagrams, and other computational arcana. They can work with a variety of simulators and formal tools that are in the verification path. Those are the tools testing the design; OnPoint is the tool testing the errors.

In a perfect world, tests are always clear and unambiguous, the answer keys are always correct, and the test-taker is the only unknown. Actually, in a truly perfect world, the test-taker also has perfect knowledge and would never fail a test. But we don’t live in that world (well, Chuck Norris does, but none of us do). Given that unfortunate reality, Vennsa is hoping to help manage the challenge of figuring out what went wrong when something goes wrong.

Now…  whether they’d be able to bring order to the DMV, well, that’s quite a different question…

 

More info:  Vennsa
