The Very Model Of Reusability

Silos can be wonderful things when used properly. They keep your grain dry when it rains. They provide handy storage. They can look lovely, providing the only real topography in an otherwise 2-D landscape. And they evoke the heart of America (well, to Americans anyway… no intent to disenfranchise the rest of the world).

Unfortunately, silos aren’t restricted to the picturesque plains of the Midwest. They exist amongst us, around us. Many of us work in silos. We are indeed kept out of the rain, and sometimes it feels like we’re in storage; we might even be picturesque. But we’re also kept from being as productive as we could be.

The progression of a system design from architecture to implementation to system test tends to involve the bare minimum of interaction between silos. The architect works in one silo and ships a specification out to a designer, who implements the design and then ships it, along with some specs, out to a test engineer tasked with ensuring that the systems shipped work correctly. Electronic System Level (ESL) design techniques are intended to break down these silos and allow more of the work done up front to be incorporated into the final product, with less rework at each step of the way. But ESL has been slow to catch on, and part of the reason may be that sizeable gaps remain in the design flow, gaps that severely undercut the benefits of investing in ESL tools.

The typical system design process starts with an architect and/or system designer scoping out the high-level behaviors that the system should exhibit. This is typically done in an abstract fashion using languages like C and C++ or systems like MATLAB. The designer will put together some kind of model and testbench to check out the specification, defining specific directed tests to prove that his or her ideas work. The idea is to explore all the basic desired areas of operation of the function. Increasingly, transaction-level modeling (TLM) can be used at a high level to speed up simulation.
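
To make this concrete, here is a toy sketch – not drawn from any particular tool or design – of what an architect’s high-level model and directed tests might look like in C++. The saturating adder and its test cases are invented purely for illustration:

    #include <cassert>
    #include <cstdint>
    #include <iostream>

    // Toy high-level ("untimed") reference model: an 8-bit saturating adder.
    // At this stage the architect cares about behavior, not clocks or pins.
    uint8_t sat_add8(uint8_t a, uint8_t b) {
        unsigned sum = unsigned(a) + unsigned(b);
        return sum > 0xFF ? 0xFF : uint8_t(sum);
    }

    int main() {
        // Directed tests: each one targets a specific behavior the spec calls out.
        assert(sat_add8(1, 2) == 3);        // normal operation
        assert(sat_add8(0xFF, 1) == 0xFF);  // saturation at the top
        assert(sat_add8(0, 0) == 0);        // identity corner
        std::cout << "all directed tests passed\n";
    }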

Once that’s done, the design gets handed off to the dude who’s going to implement the design, typically using RTL (since we’re talking about digital stuff). This designer will also have to create models and a testbench, this time using RTL. And in addition to the directed tests written to prove that the key functions work properly, additional tests are created to explore the corners of the design space, those bizarre combinations that no one expects but that could unleash the hidden darker side of the design.

Test vectors can be generated using a number of techniques, but since purely “random” generation for complex designs produces too many useless vectors, “constrained random” generation is more common – though this will still get you millions of vectors. Test coverage analysis will then typically inform the designer that various hard-to-reach places didn’t get explored. The more complicated the design, the harder some areas are to reach. So more vectors are generated, either manually or with manual direction, to nudge coverage up to an acceptable level.
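
The constrained-random idea fits in a few lines of C++. The legality constraint and the 16 coverage bins below are invented for illustration, but they show the shape of the problem:

    #include <iostream>
    #include <random>
    #include <set>

    int main() {
        std::mt19937 rng(42);
        std::uniform_int_distribution<int> dist(0, 255);

        // Invented "constraint": the interface only allows operand pairs whose
        // sum is at most 300, so unconstrained random pairs above that are useless.
        auto legal = [](int a, int b) { return a + b <= 300; };

        // Crude coverage model: 16 buckets spanning the legal sum range.
        std::set<int> hit_bins;
        long vectors = 0;
        while (hit_bins.size() < 16 && vectors < 1000000) {
            int a = dist(rng), b = dist(rng);
            if (!legal(a, b)) continue;     // constrained random: reject illegal stimulus
            hit_bins.insert((a + b) / 19);  // record which coverage bucket we landed in
            ++vectors;
        }
        std::cout << vectors << " legal vectors to hit "
                  << hit_bins.size() << "/16 bins\n";
    }

Running it shows the familiar curve: the well-populated bins fall almost immediately, while the sparse bins at the edges soak up the bulk of the vectors.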

There are a couple of problems here. First, even though the system designer went through the effort to create models and testbenches, they only work at a high level and can’t talk to lower-level constructs. So the designer doing the implementation is writing a new set of models and testbenches. To be really thorough, that work must be validated against the higher-level work to make sure that they’re really equivalent models and tests. The second problem is that different sets of tests are being created for different environments, and the methodology for creating those tests may not converge to high coverage as fast as you might like, essentially because random test generators more or less spray the logic space with values to get results, which isn’t the most efficient way to test (even if it is straightforward).
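
The cross-check described above, reduced to a toy C++ sketch: drive the same stimulus through the architect’s reference model and the implementer’s model and compare. In real life the second model would be RTL running in a simulator; here both are invented C++ functions:

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <random>

    // The architect's untimed reference model (same invented adder as above).
    uint8_t ref_sat_add8(uint8_t a, uint8_t b) {
        unsigned s = unsigned(a) + unsigned(b);
        return s > 0xFF ? 0xFF : uint8_t(s);
    }

    // Stand-in for the implementation model. In real life this would be the
    // RTL in a simulator; here it is just a second, independent C++ function.
    uint8_t impl_sat_add8(uint8_t a, uint8_t b) {
        return uint8_t(std::min(unsigned(a) + unsigned(b), 255u));
    }

    int main() {
        std::mt19937 rng(1);
        std::uniform_int_distribution<int> d(0, 255);
        // Scoreboard loop: same stimulus into both models; flag any mismatch.
        for (int i = 0; i < 100000; ++i) {
            uint8_t a = uint8_t(d(rng)), b = uint8_t(d(rng));
            if (ref_sat_add8(a, b) != impl_sat_add8(a, b)) {
                std::cout << "mismatch at a=" << int(a) << " b=" << int(b) << "\n";
                return 1;
            }
        }
        std::cout << "models agree on 100000 random vectors\n";
    }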

This illustrates the kind of gap that could be suppressing enthusiasm for ESL methodologies, since these disconnects between high- and low-level design phases really diminish the benefit of the up-front high-level abstraction. Mentor Graphics has introduced a couple of products that they believe will take a good step towards closing that gap.

One of those products is what they call their Multi-View components. These are models that can operate with different levels of abstraction at the interfaces. So in one test environment, they can interact using C or C++; in another environment, they can be made to use SystemVerilog or RTL. And a testbench written for high-level validation can also be used for lower-level validation, since the model can interface with high-level models on one side and with low-level models on the other, doing a rather respectable Janus impression. The intent is to avoid having to create multiple models, all of which do the same thing, just for different levels of abstraction: instead, a single model can work in all of the environments.
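
Mentor’s actual mechanism is proprietary, but the underlying idea resembles the classic adapter pattern: one core behavioral model wrapped by multiple interface “views.” A hypothetical C++ sketch, with all names invented:

    #include <cstdint>
    #include <iostream>

    // One core behavioral model, written once.
    struct SatAdderCore {
        uint8_t compute(uint8_t a, uint8_t b) const {
            unsigned s = unsigned(a) + unsigned(b);
            return s > 0xFF ? 0xFF : uint8_t(s);
        }
    };

    // High-level view: a transaction-style call, as a C/C++ testbench would use it.
    struct TransactionView {
        SatAdderCore core;
        uint8_t add(uint8_t a, uint8_t b) { return core.compute(a, b); }
    };

    // Low-level view: pin-ish signals sampled on a clock edge, as an RTL
    // testbench would drive them. Same core underneath; only the interface differs.
    struct SignalView {
        SatAdderCore core;
        uint8_t a_pins = 0, b_pins = 0, result_pins = 0;
        void clock_edge() { result_pins = core.compute(a_pins, b_pins); }
    };

    int main() {
        TransactionView tv;
        std::cout << "transaction view: " << int(tv.add(200, 100)) << "\n"; // 255

        SignalView sv;
        sv.a_pins = 200; sv.b_pins = 100;
        sv.clock_edge();
        std::cout << "signal view: " << int(sv.result_pins) << "\n";        // 255
    }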

While Multi-View components are sold pre-designed by Mentor, they also have an authoring kit that can be used to create models. But this system uses a language called C4 that they acquired through Spiratech, and until they can get more of a standard going with this, they’re not really emphasizing the ability to author your own models. And in setting their standards priorities, they’re focused more on model interoperability – which could be between one and two years away – before trying to get the C4 capabilities worked into a standard. So while you can technically do this now, it looks like it will be a while before they push it into the headlines.

The other product they’re announcing is their inFact testbench automation. Using this system, you provide a concise specification of the test goals along with a model of the design being tested, and that is synthesized into test sequences using any of a number of testing techniques. The tool uses the response of the design to track coverage and to direct the test generation intelligently. The intent is to be able to get better coverage with fewer vectors, as well as being able to take a single test specification and use it to generate different vectors for different purposes.

Combining inFact with Multi-View components can allow a single test-environment generation effort to provide a result that can be used to check out both the front-end architectural design and the implementation at the RTL and gate levels. The implementation engineer doesn’t have to redo the work done by the system guy, although he or she can build upon that work. The ability to pass models from the architect to the designer and verify up and down the line with a consistent set of models and tests should not only speed things up but also build confidence, since models and testbenches aren’t constantly being redone, with the potential for new errors each time.
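
inFact’s internals aren’t public, but the contrast with blind random generation can be sketched: instead of waiting for randomness to stumble into each coverage goal, construct a vector for each unhit goal directly. A toy C++ comparison, reusing the invented buckets from the earlier sketch:

    #include <algorithm>
    #include <iostream>
    #include <random>
    #include <set>

    // Same invented coverage model as before: 16 sum buckets, with a + b <= 300.
    int bucket(int a, int b) { return (a + b) / 19; }

    int main() {
        // Blind constrained random: count vectors until all 16 buckets are hit.
        std::mt19937 rng(7);
        std::uniform_int_distribution<int> d(0, 255);
        std::set<int> hit;
        long random_vectors = 0;
        while (hit.size() < 16) {
            int a = d(rng), b = d(rng);
            if (a + b > 300) continue;
            hit.insert(bucket(a, b));
            ++random_vectors;
        }

        // Goal-directed: construct one vector per bucket. Coverage climbs
        // linearly, one goal per vector - the behavior claimed for
        // intelligent, coverage-directed generation.
        int directed_vectors = 0;
        for (int bin = 0; bin < 16; ++bin) {
            int sum = bin * 19;                        // any sum landing in this bucket
            int a = std::min(sum, 255), b = sum - a;   // construct the stimulus
            (void)a; (void)b;                          // would be driven into the DUT here
            ++directed_vectors;
        }
        std::cout << "random: " << random_vectors << " vectors, directed: "
                  << directed_vectors << " vectors\n";
    }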

Mentor claims that the productivity gains can be as high as 10x. They’ve worked through a number of examples that compare the typical asymptotic flattening of coverage growth as tests are generated against the steep linear increase achieved using inFact. This means far fewer vectors, and fewer vectors means both less simulation time and, assuming these tests become part of the actual production test suite, less test time. Faster simulation, of course, gets product to market faster, and less time on the tester equates directly to lower testing costs. Hopefully those are goals that all of the silos can agree on.
