
Expanding the Scope of Verification

Transcending Functional Correctness

Looking at the agenda for the 2017 edition of the annual DVCon – arguably the industry’s premier verification conference – one sees precisely what one would expect: tutorials, keynotes, and technical sessions focused on the latest trends and techniques in the ever-sobering challenge of functional verification in the face of the relentless advance of Moore’s Law.

For five decades now, our designs have approximately doubled in complexity every two years. Our brains, however, have not. Our human engineering noggins can still process just about the same amount of stuff that we could back when we left college, assuming we haven’t let ourselves get too stale. That means that the gap between what we as engineers can understand and what we can design has been growing at an exponential rate for over fifty years. This gap has always presented the primary challenge for verification engineers and verification technology. Thirty years ago, we needed to verify that a few thousand transistors were toggling the right ways at the right times. Today, that number is in the billions. In order to accomplish that and span the complexity gap, we need significant leverage.
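The arithmetic behind that gap is worth making concrete. A back-of-the-envelope sketch (the function name and numbers here are illustrative, not from any industry dataset) shows what doubling every two years compounds to:

```python
# Back-of-the-envelope: design complexity roughly doubles every two years,
# while human comprehension stays roughly flat.
def complexity_growth(years, doubling_period=2):
    """Return the multiplicative growth factor over a span of years."""
    return 2 ** (years / doubling_period)

# Thirty years of doubling: a design of a few thousand transistors,
# scaled by ~32,768x, lands in the tens-of-millions-to-billions range.
factor_30y = complexity_growth(30)   # 2**15 = 32768.0
factor_50y = complexity_growth(50)   # 2**25 = 33,554,432.0

print(f"30-year growth factor: {factor_30y:,.0f}")
print(f"50-year growth factor: {factor_50y:,.0f}")
```

With human capacity held constant, the verification gap itself is that same exponential curve – which is why raw effort alone was never going to close it.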

The fundamentals of verification have persisted. Logic simulation has always been a mainstay, processing vectors of stimuli and expected results as fast and accurately as possible – showing us where our logic or timing has gone awry. Along the way, we started to pick up formal methods – giving us a way to “prove” that our functionality was correct, rather than trying to exhaustively simulate the important or likely scenarios. Parallel to those two avenues of advancement, we have been constantly struggling to optimize and accelerate the verification process. We’ve proceduralized verification through standards-based approaches like UVM, and we’ve worked to accelerate the execution of our verification processes through technologies such as FPGA-based prototyping and emulation.
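The reason exhaustive simulation gives way to formal proof is simple combinatorics. A toy sketch (not any real tool flow – the adder model and helper names are made up for illustration) shows how fast the input space explodes:

```python
from itertools import product

def adder_model(a: int, b: int, width: int) -> int:
    """Reference model: width-bit unsigned addition with wraparound."""
    return (a + b) & ((1 << width) - 1)

def exhaustively_verify(width: int) -> int:
    """'Simulate' every input vector against the model; return how
    many vectors had to be run -- 2**(2*width) of them."""
    count = 0
    for a, b in product(range(1 << width), repeat=2):
        assert adder_model(a, b, width) == (a + b) % (1 << width)
        count += 1
    return count

print(exhaustively_verify(4))   # 256 vectors for a mere 4-bit adder
# A 64-bit adder would need 2**128 vectors, which is why formal methods
# prove properties over all inputs instead of enumerating them.
```

Even this trivial block shows the wall: every added input bit quadruples the vector count, and formal proof sidesteps the enumeration entirely.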

Taking advantage of Moore’s Law performance gains in order to accelerate the verification of designs that are themselves growing in complexity according to Moore’s Law is, as today’s kids would probably say, “kinda meta.” But Moore’s Law alone is not enough to keep up with Moore’s Law. It’s the classic perpetual-motion conundrum. There are losses in the system that prevent the process from being perfectly self-sustaining. Each technology-driven doubling of the complexity of our designs does not yield a doubling of the computation that can be achieved. We gradually accrue a deficit.

And the task of verification is constantly expanding in other dimensions as well. At first, it was enough to simply verify that our logic was correct – that the 1s, 0s, and Xs at the inputs would all propagate down to the correct results at the outputs. On top of that, we had to worry about timing and temporal effects on our logic. As time passed, it became important to verify that embedded software would function correctly on our new hardware, and that opened up an entire new world of verification complexity. Then, people got cranky about manufacturing variation and how that would impact our verification results. And we started to get curious about how things like temperature, radiation, and other environmental effects would call our verification results into question. 

Today, our IoT applications span vast interconnected systems from edge devices with sensors and local compute resources through complex communication networks to cloud-based computing and storage centers and back again. We need to verify not just the function of individual components in that chain, but of the application as a whole. We need to confirm not simply that the application will function as intended – from both a hardware and software perspective – but that it is secure, robust, fault-tolerant, and stable. We need to ensure that performance – throughput and latency – is within acceptable limits, and that power consumption is minimized. This problem far exceeds the scope of the current notion of “verification” in our industry.

Our definition of “correct” behavior is growing increasingly fuzzy over time as well. For example, determining whether a processed video stream “looks good” is almost impossible from a programmatic perspective. The only reliable metric we have is human eyes subjectively staring at a screen. Many other metrics for system success suffer from similar subjectivity issues. As our digital applications interact more and more directly and intimately with our human, emotional, analog world, our ability to boil verification down to a known set of zeros and ones slips ever farther from our grasp.

The increasing dominance of big data and AI-based algorithms further complicates the real-world verification picture. When the behavior of both hardware and software is too complex to model, it is far too complex to completely verify. Until some radical breakthrough occurs in the science of verification itself, we will have to be content to verify components and subsystems along fairly narrow axes and hope that confirming the quality of the flour, sugar, eggs, butter, and baking soda somehow verifies the deliciousness of the cookie.

There is no question that Moore’s Law is slowly grinding to a halt. And, while that halt may give us a chance to catch our breath on the Moore’s Law verification treadmill, it will by no means bring an end to our verification challenges. The fact is – even if Moore’s Law ended today, we can already build systems far too complex to verify. If your career is in verification, and you are competent, your future job security looks pretty rosy.

But this may highlight a fundamental issue with our whole notion of verification. Verification somewhat tacitly assumes a waterfall development model. It presupposes that we design a new thing, then we verify our design, then we make and deploy the thing that we developed and verified. However, software development (and I’d argue that the development of all complex hardware/software applications such as those currently being created for IoT) follows something much more akin to agile development – where verification is a continual ongoing process as the applications and systems evolve over time after their initial deployment.

So, let’s challenge our notion of the scope and purpose of verification. Let’s think about how verification serves our customers and our business interests. Let’s re-evaluate our metrics for success. Let’s consider how the development and deployment of products and services has changed the role of verification. Let’s think about how our technological systems have begun to invert – where applications now span large numbers of diverse systems, rather than being contained within one. Moore’s Law may end, but our real work in verification has just begun.
