The Very Model Of Reusability

Silos can be wonderful things when used properly. They keep your grain dry when it rains. They provide handy storage. They can look lovely, providing the only real topography in an otherwise 2-D landscape. And they evoke the heart of America (well, to Americans anyway… no intent to disenfranchise the rest of the world).

Unfortunately, silos aren’t restricted to the picturesque plains of the Midwest. They exist amongst us, around us. Many of us work in silos. We are indeed kept out of the rain, and sometimes it feels like we’re in storage; we might even be picturesque. But we’re also kept from being as productive as we could be.

The progression of a system design from architecture to implementation to system test tends to involve the bare minimum of interaction between silos. The architect works in one silo, shipping a specification out to a designer who will implement the design; that designer ships the design and some specs out to a test engineer tasked with ensuring that the systems shipped work correctly. Electronic System Level (ESL) design techniques are intended to break down these silos and allow more of the work done up front to be incorporated into the final product, with less rework at each step of the way. But ESL has been slow to catch on, and part of the reason may be that there are some sizeable gaps remaining in the design flow that severely dilute the benefits of investing in ESL tools.

The typical system design process starts with an architect and/or system designer scoping out the high-level behaviors that the system should exhibit. This is typically done in an abstract fashion, using languages like C and C++ or environments like Matlab. The designer will put together some kind of model and testbench to check out the specification, defining specific directed tests to prove that his or her ideas work. The idea is to explore all the basic desired areas of operation of the function. Increasingly, transaction-level modeling (TLM) can be used at this high level to speed up simulation.
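
To make that concrete, here’s a minimal sketch of the flavor of thing an architect might write – plain C++ rather than a real TLM library, with the FifoModel and its directed tests invented purely for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <iostream>

// Hypothetical untimed, transaction-level model of a small FIFO:
// no clocks, no pins -- just the behavior the spec promises.
class FifoModel {
public:
    explicit FifoModel(std::size_t depth) : depth_(depth) {}

    bool push(uint32_t word) {              // returns false on overflow
        if (q_.size() == depth_) return false;
        q_.push_back(word);
        return true;
    }
    bool pop(uint32_t& word) {              // returns false on underflow
        if (q_.empty()) return false;
        word = q_.front();
        q_.pop_front();
        return true;
    }

private:
    std::size_t depth_;
    std::deque<uint32_t> q_;
};

// Directed tests: hit the specific behaviors the spec calls out.
int main() {
    FifoModel fifo(4);
    uint32_t out = 0;

    assert(!fifo.pop(out));                 // pop from empty must fail
    for (uint32_t i = 0; i < 4; ++i) assert(fifo.push(i));
    assert(!fifo.push(99));                 // push to full must fail
    for (uint32_t i = 0; i < 4; ++i) {      // data must come out in order
        assert(fifo.pop(out) && out == i);
    }
    std::cout << "directed tests passed\n";
}
```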

Once that’s done, the design gets handed off to the dude who’s going to implement it, typically in RTL (since we’re talking about digital stuff). This designer will also have to create models and a testbench, this time using RTL. And in addition to the directed tests written to prove that the key functions work properly, additional tests are created to explore the corners of the design space – those bizarre combinations that no one expects but that could unleash the hidden darker side of the design.

These extra vectors can be generated using a variety of techniques, but since purely “random” generation for complex designs produces too many useless vectors, “constrained random” is more common – and even that will still get you millions of vectors. Test coverage analysis will then typically inform the designer that various hard-to-reach places didn’t get explored; the more complicated the design, the harder some areas are to reach. So more vectors are generated, either manually or with manual direction, to nudge coverage up to an acceptable level.
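
A toy version of that dynamic, again in plain C++ (the packet stimulus and the coverage bins are invented for illustration): constrained random covers the common space quickly, but it’s the coverage report that tells you which corners never got touched.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <random>

// Invented stimulus for a packet interface: length 1..64, with the
// constraint that "priority" packets are always short (8 words or less).
struct Packet { uint32_t length; bool priority; };

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<uint32_t> len(1, 64);
    std::bernoulli_distribution pri(0.1);

    // Crude coverage model: cross of length quartile x priority flag.
    std::array<int, 8> bins{};
    for (int i = 0; i < 10000; ++i) {
        Packet p{len(rng), pri(rng)};
        if (p.priority && p.length > 8) p.length = 8;   // apply the constraint
        std::size_t bin = (p.length - 1) / 16 * 2 + (p.priority ? 1 : 0);
        ++bins[bin];
    }
    // The coverage report is what reveals the holes: long priority packets
    // can never occur under this constraint, so those cross bins stay at
    // zero no matter how many more vectors we throw at the design.
    for (std::size_t b = 0; b < bins.size(); ++b)
        if (bins[b] == 0) std::cout << "bin " << b << " never hit\n";
}
```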

There are a couple of problems here. First, even though the system designer went to the effort of creating models and testbenches, they work only at a high level and can’t talk to lower-level constructs. So the designer doing the implementation is writing a new set of models and testbenches – and, to be really thorough, that work must be validated against the higher-level work to make sure the models and tests really are equivalent. The second problem is that different sets of tests are being created for different environments, and the methodology for creating those tests may not converge to high coverage as fast as you might like, essentially because random test generators more or less spray the logic space with values to get results, which isn’t the most efficient way to test (even if it is straightforward).
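
The usual defense against the first problem is to run shared stimulus through both models and compare answers. Here’s a minimal sketch of such a cross-check – both functions are invented stand-ins; in real life the second would be RTL behind a simulation interface:

```cpp
#include <cstdint>
#include <iostream>
#include <random>

// The architect's reference: saturating 32-bit addition, written once.
uint32_t ref_saturating_add(uint32_t a, uint32_t b) {
    uint64_t s = uint64_t(a) + b;
    return s > 0xFFFFFFFFull ? 0xFFFFFFFFu : uint32_t(s);
}

// Stand-in for the implementation's model. In real life this would be
// RTL driven through a simulation interface, not another C function.
uint32_t impl_saturating_add(uint32_t a, uint32_t b) {
    uint32_t s = a + b;
    return s < a ? 0xFFFFFFFFu : s;          // wraparound means overflow
}

int main() {
    std::mt19937 rng(7);
    std::uniform_int_distribution<uint32_t> d;   // full 32-bit range
    for (int i = 0; i < 1000000; ++i) {
        uint32_t a = d(rng), b = d(rng);
        if (ref_saturating_add(a, b) != impl_saturating_add(a, b)) {
            std::cout << "mismatch at a=" << a << ", b=" << b << "\n";
            return 1;
        }
    }
    std::cout << "models agree on one million shared vectors\n";
}
```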

This illustrates the kind of gap that could be suppressing enthusiasm for ESL methodologies: disconnects between the high- and low-level design phases that really diminish the benefit of the up-front high-level abstraction. Mentor Graphics has introduced a couple of products that they believe take a good step toward closing that gap.

One of those products is what they call their Multi-View components. These are models that can operate with different levels of abstraction at their interfaces. So in one test environment, they can interact using C or C++; in another environment, they can be made to use SystemVerilog or RTL. And a testbench written for high-level validation can also be used for lower-level validation, since the model can interface with high-level models on one side and low-level models on the other, doing a rather respectable Janus impression. The intent is to avoid having to create multiple models that all do the same thing, just at different levels of abstraction: instead, a single model can work in all of the environments.
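
Mentor hasn’t published the internals, but the general shape of the trick is a single behavioral core wrapped in more than one interface. A rough sketch of the idea in plain C++ – every name here is invented, not Mentor’s API:

```cpp
#include <cassert>
#include <cstdint>
#include <iostream>

// The behavior, written once.
class CounterCore {
public:
    void load(uint32_t v) { value_ = v; }
    uint32_t step() { return ++value_; }
private:
    uint32_t value_ = 0;
};

// Face 1 -- transaction level: what a C/C++ testbench calls directly.
class CounterTlmView {
public:
    explicit CounterTlmView(CounterCore& c) : core_(c) {}
    void write(uint32_t v) { core_.load(v); }
    uint32_t tick() { return core_.step(); }
private:
    CounterCore& core_;
};

// Face 2 -- pin level: what an RTL testbench drives, one clock at a time.
struct Pins { bool load_en = false; uint32_t data_in = 0; uint32_t data_out = 0; };

class CounterRtlView {
public:
    explicit CounterRtlView(CounterCore& c) : core_(c) {}
    void clock_edge(Pins& p) {
        if (p.load_en) core_.load(p.data_in);
        p.data_out = core_.step();
    }
private:
    CounterCore& core_;
};

int main() {
    CounterCore core;
    CounterTlmView tlm(core);
    CounterRtlView rtl(core);

    tlm.write(10);                 // drive from the high-level side...
    Pins p;
    rtl.clock_edge(p);             // ...observe from the pin-level side
    assert(p.data_out == 11);
    std::cout << "both faces see the same core\n";
}
```

Both faces talk to the same core, so there’s only one behavior to write and maintain.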

While Multi-View components are sold pre-designed by Mentor, they also have an authoring kit that can be used to create your own models. But that kit uses a language called C4, which came with their Spirotech acquisition, and until a standard coalesces around it, they’re not really emphasizing do-it-yourself authoring. In setting their standards priorities, they’re focusing first on model interoperability – which could be one to two years away – before trying to get the C4 capabilities worked into a standard. So while you can technically author models now, it looks like it will be a while before they push that into the headlines.

The other product they’re announcing is their inFact testbench automation. You provide a concise specification of the test goals along with a model of the design being tested, and that gets synthesized into test sequences using any of a number of testing techniques. The tool uses the responses of the design to track coverage and to direct further test generation intelligently. The intent is to get better coverage with fewer vectors, and to let a single test specification generate different vectors for different purposes.
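
inFact’s actual algorithms are proprietary, so the sketch below only captures the flavor of coverage-directed generation (toy coverage model, invented names): instead of spraying, the generator consults the coverage map and synthesizes stimulus aimed at the emptiest bin.

```cpp
#include <array>
#include <iostream>

// Toy input space: 4 opcodes x 4 operand-size classes = 16 coverage bins.
constexpr int kOps = 4, kSizes = 4;
struct Stimulus { int op; int size; };

// Coverage-directed generation: pick the least-covered bin and build
// stimulus for it, rather than sampling the input space blindly.
Stimulus next_stimulus(const std::array<std::array<int, kSizes>, kOps>& cov) {
    Stimulus best{0, 0};
    for (int op = 0; op < kOps; ++op)
        for (int sz = 0; sz < kSizes; ++sz)
            if (cov[op][sz] < cov[best.op][best.size]) best = {op, sz};
    return best;
}

int main() {
    std::array<std::array<int, kSizes>, kOps> cov{};  // hit counts, all zero
    int vectors = 0, covered = 0;

    while (covered < kOps * kSizes) {
        Stimulus s = next_stimulus(cov);
        // ...apply (s.op, s.size) to the design and check its response...
        if (cov[s.op][s.size]++ == 0) ++covered;
        ++vectors;
    }
    std::cout << "full coverage in " << vectors << " vectors\n";  // exactly 16
}
```
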
Combining inFact with Multi-View components allows a single test-environment generation effort to produce something that can check out both the front-end architectural design and the implementation at the RTL and gate levels. The implementation engineer doesn’t have to redo the work done by the system guy, although he or she can build upon that work. The ability to pass models from the architect to the designer and verify up and down the line with a consistent set of models and tests should not only speed things up but also build confidence, since models and testbenches aren’t constantly being redone, with the potential for new errors each time.

Mentor claims that the productivity gains can be as high as 10x. They’ve worked through a number of examples comparing the typical asymptotic creep of coverage as random tests are generated with the steep, linear climb they see using inFact.
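
That asymptote-versus-line picture is easy to reproduce in miniature. The sketch below is a toy experiment, not Mentor’s data: with N equally likely coverage bins, random generation needs roughly N·ln N vectors to hit them all – the coupon-collector effect – while a generator that aims at the holes needs exactly N.

```cpp
#include <iostream>
#include <random>
#include <vector>

int main() {
    constexpr int kBins = 1000;            // coverage bins, all equally likely
    std::mt19937 rng(1);
    std::uniform_int_distribution<int> pick(0, kBins - 1);

    // Random spraying: keep generating until every bin has been hit at
    // least once (the coupon-collector problem, ~ kBins * ln(kBins)).
    std::vector<bool> hit(kBins, false);
    int covered = 0;
    long vectors = 0;
    while (covered < kBins) {
        int b = pick(rng);
        if (!hit[b]) { hit[b] = true; ++covered; }
        ++vectors;
    }
    std::cout << "random:   " << vectors << " vectors\n";  // ~7,000 here
    std::cout << "directed: " << kBins << " vectors\n";    // one per bin
}
```

For 1,000 bins the random run lands near 7,000 vectors – at least the right order of magnitude for a 10x claim. And far fewer vectors means both less simulation time and, assuming these tests become part of the actual production test suite, less test time. Faster simulation, of course, gets product to market faster, and less time on the tester equates directly to lower testing costs. Hopefully those are goals that all of the silos can agree on.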
