Does the World Need a New Verification Technology?

Verification is one of those perennial problems. I long ago lost count of the white papers I have read and the presentations I have sat through that discuss the issues of testing (roughly translated as “does this work?”) and verification (roughly translated as “does this do what the system architect, or whoever, intended?”). Accompanying these are always graphs showing the increasing percentage of the project lifecycle taken up by test and verification. And the same arguments are rolled out for chip, software, and system development.

Now a new British company, Coveritas, has come up with another approach to the problem of verifying embedded systems and software. In fact, to say that Coveritas is new is not strictly true; the company has been around for over three years. While in stealth mode, it shipped product to some big names, including some working on low-power wireless and the internet of things. The team is small, and the three senior guys (Sean Redmond, Giles Hall, and Paul Gee) each have a quarter century or more of experience in electronics. They have bootstrapped the company financially, and only now does the website reveal anything about what they are doing.

What they have done is to take an approach called spec-based verification, which has been in use in the chip-design industry for some time, and transfer it to software. The first use of the term spec-based verification that I can find was by Verisity nearly fifteen years ago, so it is no surprise that both Redmond and Hall spent time with Verisity, both before and after its acquisition by Cadence.

Spec-based verification has several useful features. It takes the verification activity to a higher level of abstraction: the test team should not be getting involved in the nitty-gritty of code when the code runs to millions of lines, nor should the tests themselves require writing many hundreds or thousands of lines of code or scripts. It tests against the specifications – the functional specification, functional test plan, interface specification, and design specification – capturing the rules for the system that are embedded in these documents and then generating tests. If the specification is changed, the tests can evolve to match the changes. And the tests remain available for later use in other, similar projects.

How does this differ from what most people are doing today? (And in this context, when we talk about most people, we are talking about those operating in a controlled development environment. Not Harry the Hairy Hacker.)

Today, the standard method is to manually create a test environment, developing tests for the scenarios the test team can imagine. This is time-consuming and inflexible (a change in the specification may require hours of work to change the tests), it doesn’t cope well with complex issues like corner cases, and it still lets bugs through to the end product. The time taken to develop and run the tests can have an impact on the overall project, particularly on the integration between hardware and software.

One attempt to get around the problem of testing only what the testers can think of testing is fuzzing – the automatic generation of random test inputs within certain defined limits. Redmond compares this to testing by rolling an orange across the keyboard, and he explains that, since the randomisation is within defined limits, fuzzing still tests only what the tester thinks needs testing.
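
For readers who have not met fuzzing before, the idea fits in a few lines. The sketch below is purely illustrative and is not drawn from any Coveritas material: uniformly random values, within fixed limits, are thrown at a made-up function under test, and the only check is that nothing obviously misbehaves.

```cpp
// Illustrative fuzzing loop (not drawn from any product): random inputs
// within fixed limits are thrown at a made-up function under test, and
// the only oracle is that nothing misbehaves.
#include <cassert>
#include <cstdint>
#include <random>

// Hypothetical stand-in for the code being tested.
int parse_packet(uint8_t type, uint16_t length) {
    if (length > 1500) return -1;   // reject oversized frames
    return type;                    // otherwise pretend to parse successfully
}

int main() {
    std::mt19937 rng(12345);                                 // fixed seed for repeatability
    std::uniform_int_distribution<int> type_dist(0, 255);    // "within certain defined limits"
    std::uniform_int_distribution<int> length_dist(0, 2048);

    for (int i = 0; i < 100000; ++i) {
        uint8_t  type   = static_cast<uint8_t>(type_dist(rng));
        uint16_t length = static_cast<uint16_t>(length_dist(rng));
        int result = parse_packet(type, length);
        assert(result == -1 || result == type);   // crude check: rejected or echoed back
    }
    return 0;
}
```

Everything the loop exercises depends on the limits the tester chose for the two distributions, which is exactly Redmond’s point.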

Coveritas’s approach, embodied in the Xcetera tool suite, starts with writing a functional test plan as executable C++ case definitions and system scenarios. The plan is based on the system specification, and, when testing is carried out, the functional coverage analyser can display, in the same terms in which the specification defines functionality, what has and has not been tested. In a standards-based project – developing stacks for communication using standard protocols, for example – an expert in the protocol can quickly review the coverage and recommend further work.
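
Coveritas has not published what its C++ case definitions look like, so the fragment below is only my guess at the flavour of such a plan; every name in it is invented for illustration and none of it is the Xcetera API. The point is simply that coverage points are declared in the vocabulary of the specification and then reported against.

```cpp
// Hypothetical sketch of an executable functional test plan (invented
// names, not the Xcetera API): coverage points are declared in the
// language of the specification, then marked as hit while the system runs.
#include <iostream>
#include <map>
#include <string>

class CoveragePlan {
public:
    void declare(const std::string& point) { hits_[point] = 0; }
    void hit(const std::string& point)     { ++hits_[point]; }

    void report() const {                       // what has and has not been tested
        for (const auto& [point, count] : hits_)
            std::cout << (count ? "covered " : "MISSING ") << point
                      << " (" << count << " hits)\n";
    }

private:
    std::map<std::string, int> hits_;
};

int main() {
    CoveragePlan plan;
    // Points taken straight from a (hypothetical) protocol specification.
    plan.declare("join: node joins with valid key");
    plan.declare("join: node rejected with expired key");
    plan.declare("route: packet relayed across three hops");

    // During a test run the scenarios record what they exercised.
    plan.hit("join: node joins with valid key");
    plan.hit("route: packet relayed across three hops");

    plan.report();   // a protocol expert can review the gaps at this level
    return 0;
}
```

The report at the end is the part that matters: the gaps show up in specification language, not in line numbers.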

From this starting point, the next step is the use of constraint-driven automatic test generation. Coveritas considers two types: specification constraints and test constraints.

Specification constraints capture the legal parameters, such as input definitions, protocol definitions, I/O, relationships, and dependencies. These are the boundaries within which the system has to operate and therefore set the limits for the test generator.

Test constraints, such as use case scenarios, target areas, probabilities and weights, corner case tests, and bug bypasses, are drawn from the functional test plan, and they give the test generator the information needed for specific tests.
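
To make the distinction concrete, here is one way the two kinds of constraint might be expressed in code. This is a sketch with invented names, not the product’s actual syntax: specification constraints are hard limits on what is legal, while test constraints bias and steer generation within those limits.

```cpp
// Hypothetical shape for the two constraint types (invented names, not
// Xcetera syntax): specification constraints define what is legal,
// test constraints say where the functional test plan wants to aim.
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct Stimulus {
    uint16_t payload_length;
    uint8_t  channel;
    bool     encrypted;
};

// Specification constraints: the legal envelope the system must live within.
struct SpecConstraint {
    std::string description;
    std::function<bool(const Stimulus&)> is_legal;
};

// Test constraints: biases and targets drawn from the functional test plan.
struct TestConstraint {
    std::string description;
    double weight;                              // relative probability of applying it
    std::function<void(Stimulus&)> steer;       // push a stimulus towards a target area
};

int main() {
    std::vector<SpecConstraint> spec = {
        {"payload no longer than 127 bytes",
         [](const Stimulus& s) { return s.payload_length <= 127; }},
        {"channel within the 11..26 band",
         [](const Stimulus& s) { return s.channel >= 11 && s.channel <= 26; }},
    };
    std::vector<TestConstraint> plan = {
        {"weight towards maximum-length payloads", 0.3,
         [](Stimulus& s) { s.payload_length = 127; }},
        {"exercise the unencrypted corner case", 0.1,
         [](Stimulus& s) { s.encrypted = false; }},
    };

    Stimulus sample{64, 15, true};
    for (const auto& c : spec)
        std::cout << c.description << ": "
                  << (c.is_legal(sample) ? "legal" : "ILLEGAL") << '\n';
    std::cout << plan.size() << " test constraints available to steer generation\n";
    return 0;
}
```

The generator treats the first kind as non-negotiable and the second as guidance.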

The generator then creates random tests, except where the constraints specifically tell it not to – exactly the opposite of fuzzing. This approach, Coveritas claims, can find bugs that were not considered in the functional test plan, such as those arising from ambiguities in the original specification or from misinterpretation of it. It can also pick up behaviour that the specifier, the developer, or the test-case writer had not considered.
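
Again purely to illustrate the principle, and not Coveritas’s implementation, a constrained-random generator can be reduced to a loop that draws random stimuli, rejects anything the specification constraints declare illegal, and lets the test constraints occasionally steer a draw towards a corner of interest:

```cpp
// Minimal constrained-random generation loop (illustrative only):
// everything is random unless a constraint says otherwise, which is the
// inverse of fuzzing's "random within whatever limits the tester set".
#include <cstdint>
#include <iostream>
#include <random>

struct Stimulus {
    uint16_t payload_length;
    uint8_t  channel;
};

// Specification constraint: the only legal envelope.
bool is_legal(const Stimulus& s) {
    return s.payload_length <= 127 && s.channel >= 11 && s.channel <= 26;
}

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> length_dist(0, 1023);
    std::uniform_int_distribution<int> channel_dist(0, 255);
    std::bernoulli_distribution corner_bias(0.2);   // test constraint: weight a corner case

    int generated = 0;
    while (generated < 10) {
        Stimulus s{static_cast<uint16_t>(length_dist(rng)),
                   static_cast<uint8_t>(channel_dist(rng))};
        if (corner_bias(rng)) s.payload_length = 127;   // steer towards maximum-length frames
        if (!is_legal(s)) continue;                     // reject anything outside the spec
        ++generated;
        std::cout << "stimulus " << generated << ": length=" << s.payload_length
                  << " channel=" << int(s.channel) << '\n';
        // drive_system_under_test(s);   // hypothetical hook into the target
    }
    return 0;
}
```

A production tool would solve the constraints directly rather than reject and retry, but the contrast with fuzzing is the same: the limits come from the specification, not from what the tester happened to imagine.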

The generator creates the tests on the fly, and it can exercise software running within a software model, a virtual prototype, an FPGA prototype, or even the production system. As testing moves closer to the actual system, the complex issues that emerge only when the code is executed on real hardware, such as some classes of corner cases, begin to appear, and the test generator can reveal them.

The Xcetera software tool suite is now available – visit www.coveritas.com.

As I was writing this, I discovered that Coveritas had been selected as one of the best British technology start-ups and was present at the British Business Embassy, a government venture running alongside the Olympic Games in London. The idea is that among the many high-level visitors to the Olympics are potential customers for British companies. Certainly Coveritas found it useful, but more relevant to this piece is that they persuaded one of their customers to go on video for the event and to publicly endorse them. NXP has been developing a low-power wireless solution for the internet of things, called JenNet. (The underlying technology was acquired when NXP bought Jennic, a UK-based fabless semiconductor company, two years ago.) Coveritas’s tools were used widely in the development of the JenNet-IP software, and NXP says on the video that the environment enabled its “engineers to find difficult-to-detect bugs” and “reduced the risk, time and cost of development.”

So that’s a good start, then.

But, to return to a theme that is a regular concern of mine, the Coveritas approach can work only within a well-defined development environment, which is essential to creating an appropriate specification. It also means that static code analysis and code reviews are likely to have removed many of the minor bugs already, leaving the heavyweight solution to cope with the complex and hard-to-find issues for which it was designed. Having said this, there is ample evidence that, even with the best environments and the best developers, the demands made on modern software are now so complex that a tool like Xcetera is essential if the end product is to be affordable, delivered on time, and able to work well in service. And, while the approach Coveritas is taking is new to software, it draws on a heritage of proven solutions.
