
The Very Model Of Reusability

Silos can be wonderful things when used properly. They keep your grain dry when it rains. They provide handy storage. They can look lovely, providing the only real topography in an otherwise 2-D landscape. And they evoke the heart of America (well, to Americans anyway… no intent to disenfranchise the rest of the world).

Unfortunately, silos aren’t restricted to the picturesque plains of the Midwest. They exist amongst us, around us. Many of us work in silos. We are indeed kept out of the rain, and sometimes it feels like we’re in storage; we might even be picturesque. But we’re also kept from being as productive as we could be.

The progression of a system design from architecture to implementation to system test tends to involve the bare minimum of interaction between silos. The architect works in one silo and ships a specification to the designer who will implement it; that designer in turn ships the design and some specs to a test engineer tasked with ensuring that the systems that ship actually work. Electronic System Level (ESL) design techniques are intended to break down these silos and allow more of the work done up front to be carried into the final product, with less rework at each step of the way. But ESL has been slow to catch on, and part of the reason may be that sizeable gaps remain in the design flow, gaps that severely dilute the benefits of investing in ESL tools.

The typical system design process starts with an architect and/or system designer scoping out the high-level behaviors that the system should exhibit. This is typically done in an abstract fashion, using languages like C and C++ or environments like Matlab. The designer will put together some kind of model and testbench to check out the specification, writing specific directed tests to prove that his or her ideas work. The idea is to exercise all of the basic intended areas of operation of the function. Increasingly, transaction-level modeling (TLM) is used at this level to speed up simulation.
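To make that concrete, here’s a minimal sketch of what a high-level model and a directed test might look like in plain C++. The FIFO and its operations are invented for illustration; they’re not from any particular flow or tool.

```cpp
// A minimal sketch of a transaction-level "golden" model plus a directed
// test, in plain C++. The FIFO here is a hypothetical example design.
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <deque>

// Transaction-level model: whole operations, no clocks, no pins.
struct FifoModel {
    std::deque<uint32_t> q;
    std::size_t depth;
    explicit FifoModel(std::size_t d) : depth(d) {}
    bool push(uint32_t v) {            // returns false when full
        if (q.size() == depth) return false;
        q.push_back(v);
        return true;
    }
    bool pop(uint32_t &v) {            // returns false when empty
        if (q.empty()) return false;
        v = q.front();
        q.pop_front();
        return true;
    }
};

// Directed tests: exercise each basic intended behavior, one by one.
int main() {
    FifoModel fifo(4);
    for (uint32_t i = 0; i < 4; ++i) assert(fifo.push(i)); // fill it up
    assert(!fifo.push(99));                                // full: rejected
    uint32_t v = 0;
    for (uint32_t i = 0; i < 4; ++i) {
        assert(fifo.pop(v));
        assert(v == i);                                    // FIFO ordering
    }
    assert(!fifo.pop(v));                                  // empty: rejected
    return 0;
}
```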

Once that’s done, the design gets handed off to the dude who’s going to implement it, typically in RTL (since we’re talking about digital stuff). This designer will also have to create models and a testbench, this time at the RTL level. And beyond the directed tests written to prove that the key functions work properly, additional tests are created to explore the corners of the design space: those bizarre combinations that no one expects but that could unleash the hidden darker side of the design.

These vectors can be generated by a number of techniques. Purely “random” generation produces too many useless vectors for complex designs, so “constrained random” generation is more common, but even that will still get you millions of vectors. Test coverage analysis will then typically inform the designer that various hard-to-reach places didn’t get explored, and the more complicated the design, the harder some areas are to reach. So more vectors are generated, either manually or with manual direction, to nudge coverage up to an acceptable level.
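The saturation problem is easy to see in miniature. Here’s a sketch, in plain C++ rather than SystemVerilog, that draws constrained-random stimulus and counts how many coverage bins have been hit; the opcode-by-length bins are invented for illustration.

```cpp
// Constrained-random generation with a simple coverage counter. The
// opcode/length fields and their legal ranges are hypothetical.
#include <cstdio>
#include <random>
#include <set>
#include <utility>

int main() {
    std::mt19937 rng(1);
    // The "constraints": only legal opcodes and lengths get generated.
    std::uniform_int_distribution<int> opcode(0, 7), length(1, 64);

    std::set<std::pair<int, int>> covered;   // coverage bins actually hit
    const int total_bins = 8 * 64;           // opcode x length cross

    for (int vectors = 1; vectors <= 20000; ++vectors) {
        covered.insert({opcode(rng), length(rng)});
        if (vectors % 4000 == 0)
            std::printf("%5d vectors -> %3zu of %d bins\n",
                        vectors, covered.size(), total_bins);
    }
    // Coverage climbs fast at first, then crawls: each new random vector
    // is ever more likely to land in a bin that's already covered.
    return 0;
}
```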

There are a couple of problems here. First, even though the system designer went to the effort of creating models and testbenches, those work only at a high level and can’t talk to lower-level constructs. So the designer doing the implementation ends up writing a new set of models and testbenches, and, to be really thorough, that new work must be validated against the higher-level work to make sure the models and tests are truly equivalent. The second problem is that different sets of tests are being created for different environments, and the methodology for creating those tests may not converge to high coverage as fast as you might like. Random test generators more or less spray the logic space with values, which isn’t the most efficient way to test (even if it is straightforward).
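That first problem implies a back-to-back check: drive both models with the same stimulus and compare results transaction by transaction. Here’s a sketch of the idea in plain C++; both “models” are trivial stand-ins.

```cpp
// Back-to-back equivalence check: the architect's high-level model and the
// implementer's re-written model must agree on every transaction. Both
// functions here are hypothetical stand-ins.
#include <cassert>
#include <cstdint>
#include <random>

uint32_t golden_model(uint32_t a, uint32_t b) { return a + b; } // architect's
uint32_t impl_model(uint32_t a, uint32_t b)   { return b + a; } // implementer's

int main() {
    std::mt19937 rng(7);
    std::uniform_int_distribution<uint32_t> d(0, 0xFFFFFFFFu);
    for (int i = 0; i < 100000; ++i) {
        uint32_t a = d(rng), b = d(rng);
        assert(golden_model(a, b) == impl_model(a, b)); // must never diverge
    }
    return 0;
}
```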

This illustrates the kind of gap that could be suppressing enthusiasm for ESL methodologies: disconnects between the high- and low-level design phases diminish the benefit of the up-front abstraction. Mentor Graphics has introduced a couple of products that they believe will take a good step towards closing that gap.

One of those products is what they call their Multi-View components. These are models that can operate at different levels of abstraction at their interfaces. In one test environment they can interact using C or C++; in another, they can be made to use SystemVerilog or RTL. And a testbench written for high-level validation can also be used for lower-level validation, since the model can interface with high-level models on one side and low-level models on the other, doing a rather respectable Janus impression. The intent is to avoid having to create multiple models that all do the same thing, just at different levels of abstraction: instead, a single model can work in all of the environments.
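Conceptually this is an adapter pattern: one core behavioral model wrapped in interface “views” at different levels of abstraction. The sketch below illustrates the idea in plain C++; it is not Mentor’s actual Multi-View API, and every name in it is invented.

```cpp
// One core model, two interface views: a function-call (transaction) view
// and a pin/cycle view. Purely illustrative; not Mentor's API.
#include <cstdint>
#include <cstdio>

// The single core model: the behavior is implemented exactly once.
struct AdderCore {
    uint32_t add(uint32_t a, uint32_t b) { return a + b; }
};

// High-level view: one C++ call per transaction.
struct TlmView {
    AdderCore &core;
    explicit TlmView(AdderCore &c) : core(c) {}
    uint32_t transact(uint32_t a, uint32_t b) { return core.add(a, b); }
};

// Low-level view: the same core driven through pin-like signals, cycle by
// cycle, the way an RTL testbench would see it.
struct PinView {
    AdderCore &core;
    uint32_t a_bus = 0, b_bus = 0, result_bus = 0;
    bool valid = false;
    explicit PinView(AdderCore &c) : core(c) {}
    void clock_edge() {                 // sample inputs, drive outputs
        if (valid) result_bus = core.add(a_bus, b_bus);
    }
};

int main() {
    AdderCore core;
    TlmView tlm(core);
    PinView pins(core);

    std::printf("TLM view: %u\n", tlm.transact(2, 3));

    pins.a_bus = 2; pins.b_bus = 3; pins.valid = true;
    pins.clock_edge();
    std::printf("Pin view: %u\n", pins.result_bus);
    return 0;
}
```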

While Mentor sells the Multi-View components pre-designed, they also have an authoring kit that can be used to create models. But that kit uses a language called C4, which they acquired through Spirotech, and until they can get more of a standard going with it, they’re not really emphasizing the ability to author your own models. In setting their standards priorities, they’re focusing first on model interoperability, which could be one to two years away, before trying to get the C4 capabilities worked into a standard. So while you can technically author models now, it looks like it will be a while before they push that capability into the headlines.

The other product they’re announcing is their inFact testbench automation. With this system, you provide a concise specification of the test goals along with a model of the design being tested, and the tool synthesizes test sequences from them using any of a number of testing techniques. It uses the design’s responses to track coverage and to direct further test generation intelligently. The intent is to get better coverage with fewer vectors, and to be able to take a single test specification and use it to generate different vectors for different purposes.

Combining inFact with Multi-View components allows a single test-environment effort to produce something that can check out both the front-end architectural design and the implementation at the RTL and gate levels. The implementation engineer doesn’t have to redo the work done by the system guy, although he or she can build upon it. The ability to pass models from architect to designer and verify up and down the line with a consistent set of models and tests should not only speed things up but also build confidence, since models and testbenches aren’t constantly being redone, with the potential for new errors each time.

Mentor claims that the productivity gains can be as high as 10x. They’ve worked through a number of examples comparing the typical asymptotic climb in coverage as tests are generated with the steep, nearly linear climb they see using inFact. That means far fewer vectors, and fewer vectors mean both less simulation time and, assuming these tests become part of the actual production test suite, less test time. Faster simulation gets product to market sooner, and less time on the tester equates directly to lower testing costs. Hopefully those are goals that all of the silos can agree on.
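You can get a feel for that asymptotic-versus-linear contrast with a toy experiment: count how many random draws it takes to hit every coverage bin at least once, versus aiming one vector at each open bin. This is just the arithmetic behind the claim, not inFact’s algorithm, and the bin count is arbitrary.

```cpp
// Random vs. coverage-directed vector counts for the same coverage goals.
// A toy model of the comparison, not any vendor's algorithm.
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int bins = 512;               // coverage goals to close

    // Random generation: keep drawing until every bin has been hit once.
    std::mt19937 rng(1);
    std::uniform_int_distribution<int> pick(0, bins - 1);
    std::vector<bool> hit(bins, false);
    int open = bins, random_vectors = 0;
    while (open > 0) {
        ++random_vectors;
        int b = pick(rng);
        if (!hit[b]) { hit[b] = true; --open; }
    }

    // Coverage-directed generation: one vector aimed at each open bin.
    const int directed_vectors = bins;

    std::printf("random:   %d vectors\n", random_vectors);   // ~3500 expected
    std::printf("directed: %d vectors\n", directed_vectors); // exactly 512
    return 0;
}
```

Random draws need roughly bins × ln(bins) vectors to close the last few stragglers, which is where the asymptote comes from; targeting the open bins closes them in one vector each.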
