
Going Back to the Formal

Property Checking Puts a New Shine on Formal Verification

Back a decade or so ago, there was a big hoo-hah about “formal verification.” Things burned hot for a while, then cooled down and more or less slipped away from view for most of us. And some of us may have been left with the impression that it was a good idea, perhaps, theoretically, but that practical realities got in the way of it being broadly useful. Now suddenly formal verification is getting some attention again. Which raises the question, what’s changed?

The first thing we need to tackle is, what does “formal verification” mean? Early on, there was a focus on proving, for example, that two circuits were equivalent. This is more specifically referred to as “equivalence checking,” for obvious reasons. So where do you want to prove equivalence? Well, pretty much anywhere a circuit goes through a transformation and you want to be able to sleep at night knowing that the transformation worked properly.

The poster child for this kind of verification was post-synthesis verification – it’s nice to know that the circuit created by synthesis is indeed what was specified. Of course, there was that nagging question that no one seemed to ask much in public – what happens if it fails? That would pretty much mean that the synthesis tool (or whatever other tool was providing whatever transformation) had a bug. That’s not really a design failure. Not much you can do except file a bug report, have a few conversations with a call center somewhere halfway around the globe, and hope that it gets fixed. Such equivalence checking is still prevalent, but thanks to maturing tools, hopefully such failures are infrequent. This relegates equivalence checking to the crossing-the-Ts-and-sleeping-well-at-night sign-off process.
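
(For the conceptually curious, an equivalence check amounts to what’s called a “miter”: feed both versions of the circuit the same inputs and insist the outputs never disagree. The SystemVerilog sketch below uses made-up module names, and real equivalence checkers grind on netlists rather than testbench-style wrappers, but the spirit is this.)

    // A minimal miter sketch: the RTL and post-synthesis versions of a
    // hypothetical design get identical inputs; the assertion demands
    // that their outputs always match. Module names are assumptions.
    module miter (input logic clk, input logic [7:0] in);
      logic [7:0] out_rtl, out_gate;

      my_design_rtl  u_rtl  (.clk(clk), .in(in), .out(out_rtl));
      my_design_gate u_gate (.clk(clk), .in(in), .out(out_gate));

      // If this ever fails, the transformation changed behavior.
      assert property (@(posedge clk) out_rtl == out_gate);
    endmodule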

But around the same time, another flavor appeared, and it was referred to as “model checking.” The idea here is that you describe the intended behavior of a circuit and then prove that your implementation behaves in a way that’s consistent with or identical to the model. But at the time, only very small models and circuits could be managed due to the amount of computation it took to churn through the proofs, and so uptake was modest at best.

Now there appears to be a new kid in town. It’s called “property checking.” Start talking to someone about it and it starts sounding suspiciously like model checking. Call them on that, and what you find is that for some reason “model checking” appears to be a phrase people are distancing themselves from – some say it’s the “academic” name for the concept. So what is property checking? It’s model checking. Only now it works. Or something like that. There’s also apparently some desire to distinguish it from the different concept of transaction-level models. Or perhaps to separate it from some of the less successful forays of yore. Today you won’t find model checking, you’ll find property checking. Whatever.

Another kind of verification you’ll hear about is “assertion-based verification” (ABV). Start digging into this, and it starts to sound suspiciously like property checking. Which sounds a lot like model checking. And an assertion sounds an awful lot like a property. If there’s a difference, it hasn’t been obvious to this babe-in-the-marketing-woods reporter. So, applying transitivity, we have the following relation: (model checking) ≈ (property checking) ≈ (assertion-based verification). That will be on the final.

The whole model – er – property checking paradigm can operate in a couple of different ways. You can sprinkle your design implementation code with assertions that describe the design intent. You’re basically saying, “OK, here’s a bit of circuitry I just designed, and here’s what I was trying to do. Please confirm that I did it right.” And you’re potentially doing it at a low level, with assertions placed near the implementations of the behavior they describe. The other way to approach it is with separate models for complete circuits – very possibly segregated from the design implementation. This complete model (hopefully complete – more on that in a minute) can then be used to validate the implementation. There are two primary languages used for the assertions: PSL (Property Specification Language) and SVA (SystemVerilog Assertions).
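
To make the first style concrete, here’s a minimal inline-assertion sketch in SVA; the module and signal names are invented for illustration.

    // A sketch of an inline assertion: the designer of this hypothetical
    // arbiter states the intent, right next to the logic, that the grant
    // vector is always one-hot or idle. Names are assumptions.
    module arbiter_with_intent (input logic clk, input logic [3:0] gnt);
      assert property (@(posedge clk) $onehot0(gnt))
        else $error("gnt is not one-hot or idle");
    endmodule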

What’s it good for?

So, given all of this, what can one do with property checking? Well, obviously, verification is one major use. As designs get larger, it becomes more difficult to write enough vectors to make sure you have covered all the potential corners of your design. Property checking has the potential to provide more complete coverage.

Alternatively, if simulation is used and an error is discovered, property checking can be used to isolate the bug and verify a fix – “what if” analysis is touted as a benefit. Or you might use it to answer the question, “How can I ensure that this line of code is hit during validation?” These represent more interactive, rather than batch-mode, uses.
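
That “can this line be hit” question maps naturally onto a cover directive rather than an assertion: instead of proving that something holds on all traces, you ask the tool to exhibit at least one trace that reaches the state. A hypothetical SVA sketch, with invented names:

    // A sketch of a reachability query: ask the formal tool to find a
    // trace where the machine enters its error-recovery state. The state
    // encoding and names are assumptions for illustration.
    module coverage_probe (input logic clk, input logic [2:0] state);
      localparam ERR_RECOVERY = 3'd5;
      cover property (@(posedge clk) state == ERR_RECOVERY);
    endmodule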

Another use is in the exploration and validation of old moldy code that you may have inherited from deep within the bowels of the company, designs that haven’t seen the light of day since before man learned to spell “verification,” and whose designers, even if known, are long gone, running off-the-grid B&Bs in the Bitterroot Mountains. Formal verification can help explore the behavior of such a mystery “gray box,” if that behavior is unknown, or at the very least validate that this thing does what it says it does.

One of the challenges of writing assertions is that the languages, while powerful, aren’t exactly intuitive when expressing complex behavior. Waveform input is becoming more prevalent as a means of describing assertions. Here you describe the behavior of inputs – only the inputs that matter, and only when they matter – and the expected response of any outputs that depend on those inputs. Simple cause and effect. This becomes one assertion, or property. An example might be: “When the Request signal goes high and the Grant signal is not high, then the Grant signal should go high two cycles later.” The Request and Grant signals will be specified only in the first cycle, since in this example, their value in any other cycle is “don’t care.”
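
In SVA, that English sentence might come out roughly like this – a sketch only, with the clock and signal names assumed:

    // When req is high and gnt is low, gnt must be high two cycles
    // later. Nothing is said about any other signal in any other cycle -
    // those are all don't-cares.
    module req_gnt_check (input logic clk, input logic req, gnt);
      property req_gets_gnt;
        @(posedge clk) (req && !gnt) |-> ##2 gnt;
      endproperty
      assert property (req_gets_gnt);
    endmodule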

The whole “don’t care” thing is important, since you DON’T want to have to specify all combinations of all signals in all cycles – the idea is to say what is intended, not provide an exhaustive description of all states in the design (which you could never do). This is more than just a convenience – implementations of a design may treat “don’t-care” states differently, and even though you don’t care how things turn out, they will turn out one way or another. By specifying “don’t care”, the verification won’t care. If you try to be more specific, in addition to wasting your time and your employer’s money, you may end up wrong if you handle don’t cares differently from your implementation tools. So go ahead, be a nihilist; don’t care.

Reducing the space

One of the challenges of formal (those of us who feel a little closer to the technology can take the liberty of being a bit more informal and leave off the “verification”) is the amount of computation required. Let’s say you’ve got an SoC implementing all of the functionality of a BlackBerry in a single chip. And you’ve got an AND gate on there that you want to validate. So you define a property for the AND gate. If the validation tool has to explore the entire state space of the chip to verify that the AND gate works, you’re gonna be there a while.
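
The property itself is trivial – something like the following sketch, with invented signal names – but a naive proof would drag the whole chip’s state space along for the ride.

    // A hypothetical property for one lone AND gate buried in a large
    // SoC: the output must always equal the AND of the inputs.
    module and_gate_check (input logic clk, and_a, and_b, and_out);
      assert property (@(posedge clk) and_out == (and_a && and_b));
    endmodule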

Ways of reducing the design space for a given assertion have been developed and have different names, including “abstraction” and “tunneling.” Because they help reduce the computation time dramatically, they’ve become something of a marketing tool, and various companies will disagree as to how important they are for differentiating products. That said, the goal is to make sure that only relevant portions of the design space are explored – the so-called “cone of influence.” Which is more or less equivalent to, and sometimes the same as, the “logic cone” that some of you may be familiar with. It’s basically the cone of circuitry originating at the inputs named in the assertion and propagating to its outputs. Cutting everything else out of the equation dramatically reduces the amount of computing required.

Of course, if someone has a room full of stuff and you give them a way to cram the stuff into a smaller space, they won’t move to a smaller room; they’ll just find more stuff to put in the now freed-up space in the room. Which means that faster computation doesn’t necessarily mean you’ll be finished faster – what it really means is that you can handle larger, more complex circuits and models. And circuit size has been a significant limitation to the usability of this kind of technology. So this has become a point of marketing differentiation – how many lines of code can be handled.

But have I thought of everything?

There’s one huge gotcha when it comes to property checking as a solution to all your problems – the quality of your verification will only be as good as the quality of the model or the properties you specify. Most checkers will blithely process the properties you call out, but if you do a crappy job specifying the design intent, then the tool will do a crappy job of pointing out that you did a crappy job of implementing the design. But it’s not always easy to figure out whether or not you’ve covered all corners of your design. And, in fact, the corners are usually where you’re going to get tripped up – they’re the same situations that you forgot to provide vectors for, which is why you moved to formal in the first place.

So there may be a “gap” between the model you’ve built and the “ideal” or “complete” model. And this gap can occur in two ways. Let’s think of it this way: each property is essentially a scenario. You define a scenario, and then you describe what’s supposed to happen in that scenario. So the first gap is where you forgot to describe some details of the outcome of a scenario. The second gap is where there are scenarios that you forgot entirely. Looked at another way, the first gap is caused by missing output specifications; the second gap reflects missing input specifications. In either case, filling that gap is important for ensuring that you’ve got complete coverage. That’s generally been left to you to figure out, but, as we’ll see, there’s help showing up there as well.
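
Returning to the hypothetical request/grant property from earlier, both gaps are easy to fall into:

    module gappy_model (input logic clk, input logic req, gnt);
      // Gap type 1 - scenario covered, outcome under-specified: gnt must
      // rise two cycles after a request, but nothing says it has to stay
      // low in the cycle in between.
      property req_gets_gnt;
        @(posedge clk) (req && !gnt) |-> ##2 gnt;
      endproperty
      assert property (req_gets_gnt);

      // Gap type 2 - scenario forgotten entirely: no property anywhere
      // says what gnt does when req is never asserted, so a design that
      // grants constantly would (vacuously) pass everything we wrote.
    endmodule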

While handy simply for ensuring complete models, this can be particularly useful for validation of IP delivered by a third party. Any reputable IP company will presumably validate its IP before delivering it, but if you are designing a system – particularly one that might be in a safety-related or military application – you want to make sure that the IP not only performs all of its functions correctly, but that it does nothing more than it’s supposed to do. Not that anyone would ever add superfluous code; worrying about something ridiculous like, oh, say, a flight simulator buried inside a spreadsheet program is the height of paranoia, but hey, some people are paid to worry about such things.

So, while implementations are validated by looking at the properties and testing the circuit, gaps are identified essentially by looking at the circuit and testing the properties. Let’s say you receive some IP from someone else into which was inadvertently inserted some malicious code. Presumably that code won’t be reflected in the model – straight verification may not reveal anything wrong, because the portion of the circuit implementing the model works. But gap identification will point out that there are some properties missing – specifically, the scenarios that engage the malicious code.

OK, so maybe it wasn’t malicious. But at the very least, you can improve the coverage of the model so that it is complete, and so that you end up with a one-to-one correspondence between your model and your implementation. They are then isomorphic. That model can be used as a golden reference for further validation down the line.

So who’s doing this?

Formal solutions are actually widely available; the Big 3 – Cadence, Mentor, and Synopsys – all have formal and ABV tools and flows. But smaller players are entering the fray as well, relying on different characteristics for differentiation.

Jasper has focused on their particular approach to design tunneling, claiming patented, superior heuristics. They’ve also emphasized what they call their “visualization flow” – the interactive mode of use for identifying bugs and exploring various scenarios. Their waveform inputs and outputs help make what can otherwise appear rather obscure look more straightforward.

OneSpin has focused on the fact that they’re the only ones with the ability to identify the property gaps; they’ll validate your circuit, but they will also validate the model, providing what they call “gap-free” verification. They also reduce the design space and handle waveform inputs.

Another company, RealIntent, has a basic ABV solution, but they’ve also decided to use their technology for some more specific tools. They have a tool to validate clock-domain crossings, injecting metastability to ensure that it isn’t propagated. They have another tool for validating false-path and multi-cycle-path constraints. And they’ve also got something that appears to be a LINT tool on steroids. It’s a bug checker, but it uses formal technology to look for certain pre-defined desirable or undesirable properties, such as dead code, unreachable states, and bus contention.

The bottom line is that there are a number of formal solutions now being actively promoted, with dedicated engineering teams frantically trying to compete with each other. That’s good news, since this technology can ride that wave of enthusiasm to become more and more accessible and usable in SoC validation.
