
The Very Model Of Reusability

Silos can be wonderful things when used properly. They keep your grain dry when it rains. They provide handy storage. They can look lovely, providing the only real topography in an otherwise 2-D landscape. And they evoke the heart of America (well, to Americans anyway… no intent to disenfranchise the rest of the world).

Unfortunately, silos aren’t restricted to the picturesque plains of the Midwest. They exist amongst us, around us. Many of us work in silos. We are indeed kept out of the rain, and sometimes it feels like we’re in storage; we might even be picturesque. But we’re also kept from being as productive as we could be.

The progression of a system design from architecture to implementation to system test tends to involve the bare minimum of interaction between silos. The architect works in one silo and ships a specification out to a designer who will implement the design; that designer, in turn, ships the design and some specs out to a test engineer tasked with ensuring that the systems shipped work correctly. Electronic System Level (ESL) design techniques are intended to break down these silos and allow more of the work done up front to be incorporated into the final product, with less rework at each step of the way. But ESL has been slow to catch on, and part of the reason may be that sizeable gaps remain in the design flow, gaps that severely dilute the benefits of investing in ESL tools.

The typical system design process starts with an architect and/or system designer scoping out the high-level behaviors that the system should exhibit. This is typically done in an abstract fashion using languages like C and C++ or environments like MATLAB. The designer will put together some kind of model and testbench to check out the specification, defining specific directed tests to prove that his or her ideas work. The idea is to explore all the basic desired areas of operation of the function. Increasingly, transaction-level modeling (TLM) is used at this high level to speed up simulation.
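
To make this concrete, here is a minimal sketch, in plain C++, of the kind of untimed model and directed test an architect might write at this stage. The parity-generator block, its behavior, and the test values are all invented for illustration; they stand in for whatever function is actually being specified.

```cpp
// Illustrative only: a hypothetical high-level model of a simple
// parity-generator block, plus the sort of directed tests an
// architect might write against the spec.
#include <cassert>
#include <cstdint>
#include <iostream>

// Untimed, transaction-level behavior: one call = one transaction.
uint8_t parity_gen(uint32_t word) {
    uint8_t p = 0;
    for (int i = 0; i < 32; ++i) p ^= (word >> i) & 1u;
    return p;  // even-parity bit for the word
}

int main() {
    // Directed tests: exercise the basic intended behaviors.
    assert(parity_gen(0x00000000u) == 0);
    assert(parity_gen(0x00000001u) == 1);
    assert(parity_gen(0xFFFFFFFFu) == 0);  // 32 ones -> even parity
    std::cout << "directed tests passed\n";
}
```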

Once that’s done, the design gets handed off to the dude who’s going to implement it, typically in RTL (since we’re talking about digital stuff). This designer will also have to create models and a testbench, this time in RTL. And in addition to the directed tests written to prove that the key functions work properly, additional tests are created to explore the corners of the design space: those bizarre combinations that no one expects but that could unleash the hidden darker side of the design.

Exhaustive test vectors can be generated using different techniques, but since purely “random” generation for complex designs produces too many useless vectors, “constrained random” generation is more common, though it will still get you millions of vectors. Test coverage analysis will then typically inform the designer that various hard-to-reach places didn’t get explored. The more complicated the design, the harder some areas are to reach. So more vectors are generated, either manually or with manual direction, to push coverage up to an acceptable level.
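
As a rough illustration of the difference between spraying the space and constraining it, here is a toy C++ sketch of constrained-random generation with a handful of functional-coverage bins, reusing the hypothetical parity block from the earlier sketch. The weighting scheme and the bins are invented; real constraint solvers are far more expressive than a switch statement.

```cpp
// Toy constrained-random generation with coverage bins (illustrative only).
#include <bitset>
#include <cstdint>
#include <iostream>
#include <random>
#include <set>
#include <string>

int main() {
    std::mt19937 rng(1);
    std::uniform_int_distribution<uint32_t> any(0u, 0xFFFFFFFFu);
    std::uniform_int_distribution<int> shape(0, 3);
    std::set<std::string> bins;  // functional-coverage bins hit so far

    for (int n = 0; n < 1000; ++n) {
        uint32_t word;
        switch (shape(rng)) {  // the "constraint": weight corner-case shapes
            case 0:  word = 0u;                     break;
            case 1:  word = 0xFFFFFFFFu;            break;
            case 2:  word = 1u << (any(rng) % 32);  break;  // one-hot
            default: word = any(rng);               break;  // unconstrained
        }
        size_t ones = std::bitset<32>(word).count();
        bins.insert(ones == 0 ? "zeros" : ones == 32 ? "ones"
                  : ones == 1 ? "onehot" : "mixed");
        // ...here the word would be driven into the model under test...
    }
    std::cout << "coverage bins hit: " << bins.size() << "/4\n";
}
```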

There are a couple of problems here. First, even though the system designer went to the effort of creating models and testbenches, they work only at a high level and can’t talk to lower-level constructs. So the designer doing the implementation ends up writing a new set of models and testbenches. To be really thorough, that work must be validated against the higher-level work to make sure the models and tests really are equivalent. The second problem is that different sets of tests are being created for different environments, and the methodology for creating those tests may not converge to high coverage as fast as you might like, essentially because random test generators more or less spray the logic space with values, which isn’t the most efficient way to test (even if it is straightforward).
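
One common way to perform that validation is to run the same stimulus through both models in lock step and compare responses, as in this sketch. It again uses the hypothetical parity block; the “implementation” side is just a second, structurally different algorithm standing in for a wrapper around the RTL simulation.

```cpp
// Lock-step equivalence checking between two models (illustrative only).
#include <cstdint>
#include <iostream>
#include <random>

// Architect's untimed reference (same behavior as the earlier sketch).
uint8_t golden_parity(uint32_t w) {
    uint8_t p = 0;
    for (int i = 0; i < 32; ++i) p ^= (w >> i) & 1u;
    return p;
}

// Stand-in for the implementation model; in a real flow this would
// call into the RTL simulator through an interface such as DPI.
uint8_t impl_parity(uint32_t w) {
    w ^= w >> 16; w ^= w >> 8; w ^= w >> 4; w ^= w >> 2; w ^= w >> 1;
    return w & 1u;  // xor-fold parity
}

int main() {
    std::mt19937 rng(7);
    std::uniform_int_distribution<uint32_t> any(0u, 0xFFFFFFFFu);
    for (int i = 0; i < 100000; ++i) {
        uint32_t w = any(rng);
        if (golden_parity(w) != impl_parity(w)) {  // compare in lock step
            std::cerr << "mismatch on 0x" << std::hex << w << "\n";
            return 1;
        }
    }
    std::cout << "models agree on all vectors tried\n";
}
```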

This illustrates the kind of gap that could be suppressing enthusiasm for ESL methodologies: the disconnects between high- and low-level design phases really diminish the benefit of the up-front high-level abstraction. Mentor Graphics has introduced a couple of products that they believe will take a good step toward closing that gap.

One of those products is what they call their Multi-View components. These are models that can operate with different levels of abstraction at the interfaces. So in one test environment, they can interact using C or C++; in another environment, they can be made to use SystemVerilog or RTL. And a testbench written for high-level validation can also be used for lower-level validation, since the model can interface with high-level models on one side and with low-level models on the other, doing a rather respectable Janus impression. The intent is to avoid having to create multiple models, all of which do the same thing, just for different levels of abstraction: instead, a single model can work in all of the environments.
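
Mentor hasn’t published the internals, but the general shape of the idea can be sketched: one behavioral core written once, wrapped by two interface “views,” one transaction-oriented and one cycle/pin-oriented. Everything in this C++ sketch is invented for illustration and is not Mentor’s actual API.

```cpp
// One core, two abstraction views (illustrative only).
#include <cstdint>
#include <iostream>

// The single behavioral core, written once.
struct ParityCore {
    uint8_t compute(uint32_t w) const {
        uint8_t p = 0;
        for (int i = 0; i < 32; ++i) p ^= (w >> i) & 1u;
        return p;
    }
};

// High-level view: one function call = one transaction.
struct TlmView {
    ParityCore core;
    uint8_t transact(uint32_t w) { return core.compute(w); }
};

// Low-level view: the same core driven bit-serially, the way a
// pin-level testbench would clock data in one bit per cycle.
struct PinView {
    ParityCore core;
    uint32_t shift = 0;
    int bits = 0;
    bool done = false;
    uint8_t result = 0;
    void clock(bool data_in) {  // called once per "cycle", LSB first
        shift = (shift >> 1) | (uint32_t(data_in) << 31);
        if (++bits == 32) { result = core.compute(shift); done = true; }
    }
};

int main() {
    TlmView tlm;
    PinView pins;
    uint32_t w = 0xDEADBEEF;
    for (int i = 0; i < 32; ++i) pins.clock((w >> i) & 1u);
    std::cout << "views agree: "
              << (tlm.transact(w) == pins.result ? "yes" : "no") << "\n";
}
```

The point of the pattern is that the behavior lives in one place: fix a bug in ParityCore and every view picks up the fix at once.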

While Multi-View components are sold pre-designed by Mentor, there is also an authoring kit that can be used to create models. But this system uses a language called C4 that Mentor acquired with Spiratech, and until they can get more of a standard going, they’re not really emphasizing the ability to author your own models. In setting their standards priorities, they’re focused first on model interoperability (which could be one to two years away) before trying to get the C4 capabilities worked into a standard. So while you can technically do this now, it looks like it will be a while before they push it into the headlines.

The other product they’re announcing is their inFact testbench automation. Using this system, you provide a concise specification of the test goals along with a model of the design being tested, and that specification is synthesized into test sequences using any of a number of testing techniques. The tool uses the design’s responses to track coverage and to direct test generation intelligently. The intent is to get better coverage with fewer vectors, and to be able to take a single test specification and use it to generate different vectors for different purposes.
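
Mentor doesn’t spell out the algorithm, but the underlying idea of coverage-directed generation can be illustrated with a toy example: treat test sequences as walks through a small rule graph, and prefer edges that haven’t been exercised yet rather than sampling blindly. The graph, states, and choice policy below are all invented.

```cpp
// Toy coverage-directed generation (not inFact's actual algorithm).
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Hypothetical rule graph: states are test phases, edges are choices.
    std::map<std::string, std::vector<std::string>> graph = {
        {"start", {"write", "read"}},
        {"write", {"write", "read", "flush"}},
        {"read",  {"write", "flush"}},
        {"flush", {"start"}},
    };
    std::map<std::string, int> edge_hits;  // the coverage database

    // Look up hits without inserting new entries into the database.
    auto hits = [&](const std::string& edge) {
        auto it = edge_hits.find(edge);
        return it == edge_hits.end() ? 0 : it->second;
    };

    std::string state = "start";
    for (int step = 0; step < 12; ++step) {
        // Directed choice: take the least-exercised outgoing edge.
        std::string best = graph[state].front();
        for (const auto& next : graph[state])
            if (hits(state + "->" + next) < hits(state + "->" + best))
                best = next;
        ++edge_hits[state + "->" + best];
        std::cout << state << " -> " << best << "\n";  // one stimulus step
        state = best;
    }
    std::cout << "distinct edges exercised: " << edge_hits.size() << "\n";
}
```
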
Combining inFact with Multi-View components allows a single test-environment-generation effort to provide a result that can be used to check out both the front-end architectural design and the implementation at the RTL and gate levels. The implementation engineer doesn’t have to redo the work done by the system guy, although he or she can build upon that work. The ability to pass models from the architect to the designer and verify up and down the line with a consistent set of models and tests should not only speed things up but also build confidence, since models and testbenches aren’t constantly being redone with the potential for new errors each time.

Mentor claims that the productivity gains can be as high as 10x. They’ve worked through a number of examples comparing the typical asymptotic creep in coverage as tests are generated against the steep, linear climb achieved using inFact. That means far fewer vectors, and fewer vectors mean both less simulation time and, assuming these tests become part of the actual production test suite, less test time. Faster simulation, of course, gets product to market faster, and less time on the tester equates directly to lower testing costs. Hopefully those are goals that all of the silos can agree on.

