
Does the World Need a New Verification Technology?

Verification is one of those perennial problems. I long ago lost count of the white papers I have read and the presentations I have sat through that discuss the issues of testing (roughly translated as “does this work?”) and verification (roughly translated as “does this do what the system architect, or whoever, intended?”). Accompanying these are always graphs showing the increasing percentage of the project lifecycle taken up by test and verification. And the same arguments are rolled out for chip, software, and system development.

Now a new British company, Coveritas, has come up with another approach to the problem of verifying embedded systems and software. In fact, to say that Coveritas is new is not strictly true; the company has been around for over three years. While in stealth mode, it shipped product to some big names, including some working on low-power wireless and the internet of things. The team is small, and the three senior guys (Sean Redmond, Giles Hall and Paul Gee) each have a quarter century or more of experience in electronics. They have bootstrapped the company financially, and only now does the website reveal anything about what they are doing.

What they have done is to take spec-based verification, an approach that has been in use in the chip-design industry for some time, and transfer it to software. The first use of the term spec-based verification that I can find was by Verisity nearly fifteen years ago, so it is no surprise that both Redmond and Hall spent time with Verisity, both before and after its acquisition by Cadence.

Spec-based verification has several useful features. It takes the verification activity to a higher level of abstraction: the test team should not be getting involved in the nitty-gritty of code when the code runs to millions of lines, nor should the tests themselves require writing many hundreds or thousands of lines of code or scripts. It tests against the specifications – functional specification, functional test plan, interface specification, and design specification – capturing the rules for the system that are embedded in these specifications and then generating tests. If the specification is changed, the tests can evolve to meet the changes. And the tests are available for later reuse in other, similar projects.

How does this differ from what most people are doing today? (And in this context, when we talk about most people, we are talking about those operating in a controlled development environment. Not Harry the Hairy Hacker.)

Today, the standard method is to manually create a test environment, developing tests for scenarios that the test team can imagine. This is time-consuming, inflexible (a change in the specification may require hours of work in changing the tests), doesn’t cope with complex issues like corner cases, and still lets bugs through to the end product. The time taken to develop and run the tests can have an impact on the overall project, particularly on the integration between hardware and software.

An attempt to get around the problem of testing only what the testers can think of testing is fuzzing, the automatic generation of random test inputs within certain defined limits. Redmond compares this to testing by rolling an orange across the keyboard, and he points out that, since the randomisation is within defined limits, fuzzing is still testing only what the tester thinks needs testing.
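To make that contrast concrete, here is a minimal sketch of what limit-bounded random fuzzing amounts to. The parse_frame() routine and the limits are my own hypothetical illustrations and have nothing to do with Coveritas:

#include <cstdint>
#include <random>
#include <vector>

// Hypothetical unit under test: accepts frames whose first byte is a known type code.
static bool parse_frame(const std::vector<std::uint8_t>& frame) {
    return !frame.empty() && frame[0] < 0x10;
}

int main() {
    std::mt19937 rng(12345);                               // fixed seed so runs are repeatable
    std::uniform_int_distribution<int> len_dist(1, 128);   // the "defined limits": frame length...
    std::uniform_int_distribution<int> byte_dist(0, 255);  // ...and byte values

    int accepted = 0;
    for (int i = 0; i < 10000; ++i) {
        std::vector<std::uint8_t> frame(len_dist(rng));    // roll the orange: random length...
        for (auto& b : frame) {
            b = static_cast<std::uint8_t>(byte_dist(rng)); // ...and random content
        }
        if (parse_frame(frame)) {
            ++accepted;
        }
    }
    return accepted > 0 ? 0 : 1;                           // crude check: did anything parse at all?
}

The limits in len_dist and byte_dist are still chosen by the tester, which is the point Redmond is making: the exploration is only as wide as the tester's imagination.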

Coveritas’s approach, in the Xcetera tool suite, starts with writing a functional test plan as executable C++ case definitions and system scenarios. This is based on the system specification, and, when testing is carried out, the functional coverage analyser can display what functionality has and has not been tested, in the same terms in which the specification defines that functionality. In a standards-based project, for example in developing stacks for communication using standard protocols, an expert in the protocol can quickly review the coverage and recommend further work.
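To give a flavour of the idea (and only the idea: the class and item names below are hypothetical illustrations, not the actual Xcetera API), an executable functional test plan with coverage reporting might look something like this:

#include <iostream>
#include <map>
#include <string>

// Sketch of an executable functional test plan: each coverage point
// corresponds to an item of functionality named in the specification.
class TestPlan {
public:
    void declare(const std::string& spec_item) { hits_[spec_item] = 0; }  // item from the spec
    void hit(const std::string& spec_item) { ++hits_[spec_item]; }        // a test exercised it

    void report() const {   // coverage reported in the specification's own terms
        for (const auto& [item, count] : hits_) {
            std::cout << (count ? "[covered]   " : "[UNCOVERED] ")
                      << item << " (" << count << " test(s))\n";
        }
    }

private:
    std::map<std::string, int> hits_;
};

int main() {
    TestPlan plan;
    plan.declare("join: node joins network with valid key");
    plan.declare("join: node is rejected with invalid key");
    plan.declare("routing: frame is forwarded across three hops");

    // In practice each hit() would sit inside a test scenario; here we fake two runs.
    plan.hit("join: node joins network with valid key");
    plan.hit("routing: frame is forwarded across three hops");

    plan.report();  // the uncovered item is an immediate, spec-level gap to review
}

Because each coverage point maps onto an item in the specification, the report reads in the specification's terms rather than in terms of lines of code exercised, which is what lets a protocol expert review it quickly.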

From this starting point, the next step is the use of constraint-driven automatic test generation. Coveritas considers two types: specification constraints and test constraints.

Specification constraints are about the legal parameters, such as input definitions, protocol definitions, I/O, relationships, and dependencies. These are the boundaries within which the system has to operate and therefore provide the boundaries for the test generator.

Test constraints, such as use case scenarios, target areas, probabilities and weights, corner case tests, and bug bypasses, are drawn from the functional test plan, and they give the test generator the information needed for specific tests.

The generator then creates random tests except in those cases where the constraints specifically tell it that it shouldn’t, which is exactly the opposite of fuzzing. This approach, Coveritas claims, can find bugs that were not considered in the functional test plan, such as ambiguities in the original specification or misinterpretations of it. It can also pick up behaviour that the specifier, the developer, or the test-case writer had not considered.
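Here is a minimal sketch of the principle, assuming a simple radio-frame stimulus with constraint values of my own choosing; the structures are illustrative, not the Coveritas tooling:

#include <iostream>
#include <random>

// Hypothetical stimulus for a radio stack; the fields and ranges are illustrative only.
struct Frame {
    int  channel;         // radio channel
    int  payload_length;  // payload size in bytes
    bool secured;         // encryption flag
};

int main() {
    std::mt19937 rng(2012);

    // Specification constraints: the legal envelope the system must handle
    // (values loosely modelled on an 802.15.4-style radio, purely for illustration).
    std::uniform_int_distribution<int> channel(11, 26);
    std::uniform_int_distribution<int> payload(0, 127);

    // Test constraints: biases drawn from the functional test plan, e.g. weight
    // generation towards secured, maximum-length frames as a corner-case target.
    std::bernoulli_distribution secured(0.8);
    std::bernoulli_distribution force_max_payload(0.3);

    for (int i = 0; i < 5; ++i) {
        Frame f;
        f.channel        = channel(rng);
        f.payload_length = force_max_payload(rng) ? 127 : payload(rng);
        f.secured        = secured(rng);

        // Anything inside the legal envelope is fair game unless a constraint forbids it.
        std::cout << "ch=" << f.channel
                  << " len=" << f.payload_length
                  << " sec=" << f.secured << '\n';
    }
}

Because everything inside the legal envelope remains fair game unless a constraint rules it out, this style of generation can wander into scenarios that nobody thought to write a test for.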

The generator creates the tests on the fly, and it can exercise software running within a software model, a virtual prototype, an FPGA prototype, or even the production system. As the testing gets closer to the actual system, the complex issues that arise only when the code is executed on real hardware, such as some classes of corner case, will emerge, and the test generator can reveal them.

The Xcetera software tool suite is now available – visit www.coveritas.com.

As I was writing this, I discovered that Coveritas had been selected as one of the best British technology start-ups and was present at the British Business Embassy, a government venture running alongside the Olympic Games in London. The idea is that among the many high-level visitors to the Olympics are potential customers for British companies. Certainly Coveritas found it useful, but more relevant to this piece is that they persuaded one of their customers to go on video for the event and to publicly endorse them. NXP has been developing a low-power wireless solution for the internet of things, called JenNet. (The underlying technology was acquired when NXP bought Jennic, a UK-based fabless semiconductor company, two years ago.) Coveritas was used widely in the development of the JenNet-IP software, and NXP says on the video that the environment enabled the “engineers to find difficult-to-detect bugs” and “reduced the risk, time, and cost of development.”

So that’s a good start, then.

But, to return to a theme that is a regular concern of mine, the Coveritas approach can work only within a well-defined development environment, which is essential to creating an appropriate specification. This also makes it likely that static code analysis and code reviews will have removed many of the minor bugs, leaving the heavyweight solution to cope with the complex and hard-to-find issues for which it has been designed. Having said this, there is ample evidence that, even with the best of environments and the best developers, the demands made upon modern software are now so complex that a tool like Xcetera is essential if the end product is to be affordable, delivered on time, and able to work well in service. And, while the approach that Coveritas is taking is new to software, it draws on a heritage of proven solutions.
