Expanding the Scope of Verification

Transcending Functional Correctness

Looking at the agenda for the 2017 edition of the annual DVCon (arguably the industry’s premier verification conference), one sees precisely what one would expect: tutorials, keynotes, and technical sessions focused on the latest trends and techniques in the ever-sobering challenge of functional verification in the face of the relentless advance of Moore’s Law.

For five decades now, our designs have approximately doubled in complexity every two years. Our brains, however, have not. Our human engineering noggins can still process just about the same amount of stuff that we could back when we left college, assuming we haven’t let ourselves get too stale. That means that the gap between what we as engineers can understand and what we can design has been growing at an exponential rate for over fifty years. This gap has always presented the primary challenge for verification engineers and verification technology. Thirty years ago, we needed to verify that a few thousand transistors were toggling the right ways at the right times. Today, that number is in the billions. In order to accomplish that and span the complexity gap, we need significant leverage.
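As a rough back-of-envelope sketch of that gap (taking the two-year doubling cadence above at face value, starting from an assumed “few thousand” transistors, and treating human capacity as flat), the arithmetic looks something like this:

# Back-of-envelope only: compound a two-year doubling cadence and compare it
# against a human comprehension budget that stays roughly constant.
def complexity(start, years, doubling_period_years=2.0):
    """Design complexity after `years`, doubling every `doubling_period_years`."""
    return start * 2 ** (years / doubling_period_years)

start_transistors = 5_000  # assumed stand-in for "a few thousand" transistors

print(f"after 30 years: {complexity(start_transistors, 30):,.0f}")  # ~164 million
print(f"after 50 years: {complexity(start_transistors, 50):,.0f}")  # ~168 billion

# With engineering brainpower held roughly constant, the ratio of design size
# to what one engineer can hold in their head grows by the same 2**(years/2)
# factor. That exponent is the leverage verification technology has to supply.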

The fundamentals of verification have persisted. Logic simulation has always been a mainstay, processing vectors of stimuli and expected results as fast and accurately as possible and showing us where our logic or timing has gone awry. Along the way, we started to pick up formal methods, which give us a way to “prove” that our functionality is correct rather than trying to exhaustively simulate the important or likely scenarios. Parallel to those two avenues of advancement, we have been constantly struggling to optimize and accelerate the verification process. We’ve proceduralized verification through standards-based approaches like UVM, and we’ve worked to accelerate the execution of our verification processes through technologies such as FPGA-based prototyping and emulation.
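To make the contrast concrete, here is a minimal, purely illustrative sketch in Python. The “DUT” model, the golden reference, and the stimulus vectors are all invented for this example (a real flow would use a SystemVerilog/UVM testbench and a formal property checker, not Python), but it shows the difference between checking a handful of chosen scenarios and checking every possible one.

# Toy illustration, not UVM and not a real formal tool: the same correctness
# check expressed two ways. The DUT is a hypothetical 8-bit saturating adder
# with a deliberately planted corner-case bug.
from itertools import product

def dut_sat_add(a, b):
    """Buggy model of the design under test: it should clamp any overflow
    at 255, but it only handles the single case a + b == 256 and silently
    wraps every larger sum."""
    s = a + b
    return 255 if s == 256 else s & 0xFF

def reference(a, b):
    """Golden reference: a true 8-bit saturating add."""
    return min(a + b, 255)

# Simulation-style checking: drive a handful of directed stimulus vectors
# and compare the DUT's outputs against expected results.
vectors = [(0, 0), (1, 2), (100, 100), (255, 1), (200, 56)]
for a, b in vectors:
    assert dut_sat_add(a, b) == reference(a, b), (a, b)
print("directed simulation: all vectors pass")  # the planted bug slips through

# Formal-flavored checking: exhaustively check the property over every input,
# feasible here only because the toy input space is tiny.
failures = [(a, b) for a, b in product(range(256), repeat=2)
            if dut_sat_add(a, b) != reference(a, b)]
print(f"exhaustive check: {len(failures)} failing input pairs found")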

Taking advantage of Moore’s Law performance gains in order to accelerate the verification of our designs as they grow in complexity according to Moore’s Law is, as today’s kids would probably say, “kinda meta.” But Moore’s Law alone is not enough to keep up with Moore’s Law. It’s the classic perpetual-motion conundrum. There are losses in the system that prevent the process from being perfectly self-sustaining. Each technology-driven doubling of the complexity of our designs does not yield a doubling of the computation that can be achieved. We gradually accrue a deficit.

And the task of verification is constantly expanding in other dimensions as well. At first, it was enough to simply verify that our logic was correct – that the 1s, 0s, and Xs at the inputs would all propagate down to the correct results at the outputs. On top of that, we had to worry about timing and temporal effects on our logic. As time passed, it became important to verify that embedded software would function correctly on our new hardware, and that opened up an entire new world of verification complexity. Then, people got cranky about manufacturing variation and how that would impact our verification results. And we started to get curious about how things like temperature, radiation, and other environmental effects would call our verification results into question. 

Today, our IoT applications span vast interconnected systems, from edge devices with sensors and local compute resources, through complex communication networks, to cloud-based computing and storage centers and back again. We need to verify not just the function of individual components in that chain, but that of the application as a whole. We need to confirm not simply that the application will function as intended, from both a hardware and a software perspective, but that it is secure, robust, fault-tolerant, and stable. We need to ensure that performance (throughput and latency) is within acceptable limits, and that power consumption is minimized. This problem far exceeds the scope of the current notion of “verification” in our industry.

Our definition of “correct” behavior is growing increasingly fuzzy over time as well. For example, determining whether a processed video stream “looks good” is almost impossible from a programmatic perspective. The only reliable metric we have is human eyes subjectively staring at a screen. Many other metrics for system success suffer from similar subjectivity. As our digital applications interact more and more directly and intimately with our human, emotional, analog world, our ability to boil verification down to a known set of zeros and ones slips ever farther from our grasp.

The increasing dominance of big data and AI-based algorithms further complicates the real-world verification picture. When the behavior of both hardware and software is too complex to model, it is far too complex to completely verify. Until some radical breakthrough occurs in the science of verification itself, we will have to be content to verify components and subsystems along fairly narrow axes and hope that confirming the quality of the flour, sugar, eggs, butter, and baking soda somehow verifies the deliciousness of the cookie.

There is no question that Moore’s Law is slowly grinding to a halt. And, while that halt may give us a chance to catch our breath on the Moore’s Law verification treadmill, it will by no means bring an end to our verification challenges. The fact is, even if Moore’s Law ended today, we can already build systems far too complex to verify. If your career is in verification and you are competent, your job security looks pretty rosy.

But this may highlight a fundamental issue with our whole notion of verification. Verification somewhat tacitly assumes a waterfall development model. It presupposes that we design a new thing, then we verify our design, then we make and deploy the thing that we developed and verified. However, software development (and, I’d argue, the development of all complex hardware/software applications such as those currently being created for IoT) follows something much closer to an agile model, where verification is a continual process as the applications and systems evolve over time after their initial deployment.

So, let’s challenge our notion of the scope and purpose of verification. Let’s think about how verification serves our customers and our business interests. Let’s re-evaluate our metrics for success. Let’s consider how the development and deployment of products and services has changed the role of verification. Let’s think about how our technological systems have begun to invert – where applications now span large numbers of diverse systems, rather than being contained within one. Moore’s Law may end, but our real work in verification has just begun.
