When it comes to system-on-chip verification, two trends have become painfully obvious: it is expensive and it takes too long. The most expensive parts of today’s SoC design flow are the tasks that demand direct manual effort or judgment from the engineer. In the case of verification, far too much time and money are wasted on tasks that don’t add value, such as figuring out how supposedly correct intellectual property (IP) actually works, debugging “dumb” errors, or deciding which signals to record in a given simulation run. While improved design tools and methodologies, coupled with higher levels of abstraction, have made some headway in shortening design and verification times, the time required to determine the root cause of problems found in large, long simulations keeps growing.
Having a clear understanding of a design’s internal behavior is critical to effectively debugging complex chip problems, but gaining that understanding has traditionally exacerbated the cost and time of simulation-based verification, since attaining visibility only serves to slow simulation down. Until now, cost-effectively gaining visibility into a large, complex design’s internal signals has been impractical. New visibility enhancement technology offers the best hope of confronting these issues by enabling the engineer to quickly and easily gain access to the signals needed to analyze and debug today’s complex SoCs.
The Simulation Dilemma
The problem with gaining full visibility into a design’s internal signals lies in the unpredictability of the simulation process. With small block-level simulations, it is practical to record every value change on every signal. This produces a rich database of time-ordered event data that the engineer can use to understand the block’s behavior and debug errors when a problem is flagged by the regression testbench. However, this data comes at a penalty: running a simulation that saves all the signal values for debug significantly slows the simulation.
When dealing with large subsystems or full chip-level simulation, the effect of this slowdown can be dramatic. The overhead required to record all events on all of the signals overwhelms the run time and fills the available disk space. In fact, run times can explode by a factor of five while disk requirements can run into the hundreds of gigabytes.
Because of the extreme expense associated with recording all of this data, engineers often record virtually no information. The first simulation run is almost always executed with no recording at all. Since engineers are generally optimistic by nature, they simply run the design with the testbench checking for problems that they don’t really expect to find. When the testbench flags a mismatch between the expected results and the actual behavior, the verification methodology breaks down. The engineering team then has to figure out what to dump. Does it turn on full recording of every signal for the entire run and risk filling disk space? Or, does it selectively dump based on a best-guess judgment as to where the problem lies—knowing full well that a third run may be necessary if the guess is incorrect?
The lack of predictability in this process is a huge problem, particularly since full-chip simulation runs often occur at the end of the verification cycle. Finding a problem at this late date leaves the team with few enviable options. It can opt to build the chip, all the while hoping that the problem is not in the hardware. Alternatively, it can hold up the fabrication process until the cause of the mismatch has been tracked down. Assuming it takes one day for the no-recording simulation, it would then take roughly five days for the full-dump pass. Or, it might take two days to complete a partial dump that may or may not generate the needed data. Either way, the result for the engineering team is the same: increased cost and lost time.
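To make that trade-off concrete, the back-of-the-envelope calculation below compares the expected turnaround of the two re-run strategies using the run times cited above. The 60 percent chance that a best-guess partial dump captures the right signals is purely a hypothetical figure for illustration.

```python
# Expected debug turnaround for the two re-run strategies described above.
# Run times (1 day no-dump, 5 days full dump, 2 days partial dump) come from
# the article; the probability of guessing the right signals is hypothetical.
no_dump_days = 1.0       # first pass, nothing recorded
full_dump_days = 5.0     # re-run recording every signal (~5x slower)
partial_dump_days = 2.0  # re-run recording a best-guess subset
p_guess_right = 0.6      # assumed odds the guessed subset is sufficient

full_redo = no_dump_days + full_dump_days
# If the guess misses, a third run is needed; assume for simplicity that
# the retry is another partial-dump run.
partial_redo = (no_dump_days + partial_dump_days
                + (1.0 - p_guess_right) * partial_dump_days)

print(f"full-dump strategy:      {full_redo:.1f} days")   # 6.0 days
print(f"partial-dump (expected): {partial_redo:.1f} days")  # 3.8 days
```

Either way, days are lost before debug can even begin, and the partial-dump savings evaporate whenever the guess is wrong.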
Consequently, when it comes to simulation, the choice is either faster speed with little or no visibility into the design or full signal visibility with the penalty of very slow simulation. Clearly the ability to observe signals impacts verification time. The trick is to find a way to minimize any negative impact or penalty while achieving full visibility—or at least enough visibility to effectively debug complex chip problems.
Enhancing Design Visibility
Visibility enhancement technology offers a viable means of addressing the trade-off between signal observability and its impact (e.g., simulation performance, file size and verification time) by allowing for partial signal dumping while still providing full visibility for debug. For gate-level simulations, a correlation engine automatically maps the gate-level signal names in the simulation dump file to their RTL names. By providing a familiar, RTL-centric debug environment that offers full signal visibility with limited impact on simulation performance, visibility enhancement technology effectively accelerates the hardware designer’s verification process.
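As a rough illustration of what such a correlation engine does, the sketch below undoes two common synthesis name manglings with pattern rules. Real correlation engines rely on the synthesis tool’s name-mapping data rather than heuristics; the rules and example names here are purely illustrative.

```python
import re

# Hypothetical correlation rules: undo two common gate-level name manglings.
# Real tools use the synthesis tool's name-mapping database; these regexes
# and the example hierarchy names are illustrative only.
RULES = [
    (re.compile(r"_reg(\[\d+\])?$"), r"\1"),  # q_reg[3] -> q[3], q_reg -> q
    (re.compile(r"_(\d+)_$"), r"[\1]"),       # data_7_  -> data[7]
]

def gate_to_rtl(gate_name: str) -> str:
    """Map a mangled gate-level signal name back to its likely RTL name."""
    rtl = gate_name
    for pattern, repl in RULES:
        rtl = pattern.sub(repl, rtl)
    return rtl

print(gate_to_rtl("top.u_core.q_reg[3]"))  # top.u_core.q[3]
print(gate_to_rtl("top.u_core.data_7_"))   # top.u_core.data[7]
```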
Figure 1. Typical flow of a visibility enhancement system
So how exactly does a visibility enhancement system work? The basic technology flow involves a number of key steps as illustrated in Figure 1. These steps include:
Step 1: Read the RTL source code or gate-level netlist into the system. The code is compiled and analyzed to fully understand the functionality of the design at a behavioral level, including clock synchronization.
Step 2: Analyze the design to determine the minimum sufficient set of signals to save for full visibility after simulation. The output of this process is a list of signals that can be read into the simulator for dumping. The analysis provides the optimal set of signals for the entire design or any specified sub-block; typically, only around 15 percent of the signals inside a block need to be dumped to reconstruct 95 to 100 percent visibility after simulation. This minimum set is determined through a recursive analysis of the desired block signals and their corresponding fan-in (see the first sketch following these steps).
Step 3: Perform simulation. With a partial list of signals to dump, as compared to a full signal dump, simulation runs much faster.
Step 4: Using the VCD or FSDB file, along with the RTL or netlist, expand the necessary signals “on the fly” for the path of interest during debug (see the second sketch below). Only the data needed for the signals of interest within a specified time window is calculated, eliminating time wasted computing signals that are not needed for the debug task.
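The first sketch below illustrates the kind of recursive fan-in analysis Step 2 describes, assuming the design has already been compiled into a simple netlist graph. Registers and primary inputs are treated as essential, since their values cannot be recomputed from other signals; purely combinational signals are skipped because Step 4 can rebuild them. All data structures and names here are illustrative, not the tool’s actual internals.

```python
from typing import Dict, List, Set

def essential_signals(targets: List[str],
                      fanin: Dict[str, List[str]],
                      sequential: Set[str],
                      primary_inputs: Set[str]) -> Set[str]:
    """Walk the fan-in cone of each target, collecting the registers and
    primary inputs whose values must be dumped during simulation."""
    essential: Set[str] = set()
    visited: Set[str] = set()

    def walk(sig: str) -> None:
        if sig in visited:
            return
        visited.add(sig)
        if sig in sequential or sig in primary_inputs:
            essential.add(sig)  # must be recorded; stop here, since its
            return              # value cannot be derived from anything else
        for driver in fanin.get(sig, []):
            walk(driver)        # combinational: recurse into its drivers

    for target in targets:
        walk(target)
    return essential

# Toy netlist: y = f(a, q); q is a register fed by d; d = g(a, b)
fanin = {"y": ["a", "q"], "q": ["d"], "d": ["a", "b"]}
print(essential_signals(["y"], fanin,
                        sequential={"q"}, primary_inputs={"a", "b"}))
# -> {'a', 'q'}: dumping 2 of 5 signals is enough to rebuild y later
```

Step 4’s on-the-fly expansion can then be pictured as the inverse operation: re-evaluating the skipped combinational logic from the dumped waveforms, restricted to the requested time window. This second sketch assumes each combinational signal carries an evaluation function derived from the compiled RTL and that waveforms are simple time-to-value maps; both are simplifications of what a real debug system stores.

```python
from typing import Callable, Dict, List

Wave = Dict[int, int]  # event time -> value

def value_at(wave: Wave, t: int) -> int:
    """Last recorded value at or before time t (0 if none: a simplification)."""
    times = [u for u in wave if u <= t]
    return wave[max(times)] if times else 0

def expand(signal: str,
           dumped: Dict[str, Wave],
           eval_fn: Dict[str, Callable[..., int]],
           fanin: Dict[str, List[str]],
           t_start: int, t_end: int) -> Wave:
    """Reconstruct a signal's waveform inside [t_start, t_end] only."""
    if signal in dumped:  # essential signal: read it straight from the dump
        return {t: v for t, v in dumped[signal].items()
                if t_start <= t <= t_end}
    drivers = [expand(d, dumped, eval_fn, fanin, t_start, t_end)
               for d in fanin[signal]]
    event_times = sorted(set().union(*(set(w) for w in drivers)))
    out: Wave = {}
    for t in event_times:  # re-evaluate the logic at each driver event
        out[t] = eval_fn[signal](*[value_at(w, t) for w in drivers])
    return out

# Toy example: y = a AND q, with a and q dumped as essential signals
dumped = {"a": {0: 1, 20: 0}, "q": {0: 0, 10: 1}}
y = expand("y", dumped, eval_fn={"y": lambda a, q: a & q},
           fanin={"y": ["a", "q"]}, t_start=0, t_end=30)
print(y)  # {0: 0, 10: 1, 20: 0}
```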
Eyeing the Benefits
A key benefit of visibility enhancement technology is that it reduces the time and resources spent on regression simulations. According to recent benchmarks, when the technology is employed, simulation times are typically only 20 percent greater than a simulation with no signal dumping. This represents a significant reduction in run time compared to a full dump, which can take more than twice as long as a simulation with no dumping. The benchmarks also show that the reduction in file size is greater than 80 percent, and greater still for gate-level simulations.
Other critical benefits realized through the use of visibility enhancement technology are:
- Saves time. The previously mentioned benchmarks demonstrated that it is time-effective to use a partial dumping method for all regression simulation runs. If a test passes, the file can be deleted. If a test fails, the engineer can immediately start debug, much sooner in the design process than traditional methodologies would allow.
The graphic in Figure 2 illustrates this by showing the time saved for a 50 million equivalent-gate RTL simulation. Note that the overhead for the visibility enhancement methodology is 11 percent above a simulation with no signal dumping, with a 75 percent reduction in the simulation dump (Fast Signal Database, or FSDB) file size.
Figure 2. Time savings comparison between signal dumping with and without visibility enhancement technology.
- Eliminates guesswork. With visibility enhancement technology, the guesswork associated with selecting which signals or blocks to dump is completely eliminated.
- Provides effective performance. Visibility enhancement technology offers a much more effective solution than waiting for a second simulation with full or selective dumping, as shown in Figure 3. The first methodology cited uses no visibility enhancement; its timeline illustrates the time needed for partial signal dumping with possible iterative simulations, an approach that often leads to an unpredictable total debug time. The second methodology also uses no visibility enhancement and represents the time needed for full signal dumping; the simulation time is long, but at least predictable.

The last methodology in Figure 3 deploys visibility enhancement technology. It allows for a single simulation with only a 25 percent overhead compared to simulation with no signal dumping, and a simulation dump file that is 85 percent smaller than a full signal dump. As a result, the engineer can start debugging much sooner, with the assurance that only one debug session is required.
Figure 3. Comparison of simulation methodologies with and without visibility enhancement technology.
The Bottom Line
As the complexity of SoCs continues to increase, so too does the need for fast, efficient and cost-effective verification methodologies. Simulation, a key verification method, is held back from achieving these goals by a lack of signal observability: engineers must choose between slow, fully dumped runs and fast runs with no visibility into the design. Visibility enhancement technology addresses this challenge by providing full signal observability with a significant reduction in simulation overhead. The verification cost savings include smaller simulation dump files, faster run times, fewer iterations, getting to debug sooner, and increased overall productivity with limited resources of people, tools, time and hardware. For many of today’s SoC designers, these benefits may ultimately mean the difference between chip success and failure.
About the Author
Martin Rowe is a Technical Manager at SpringSoft, formerly Novas Software, where he has worked for six years. Prior to that, Martin was an Application Engineer in Europe and also worked at Chrysalis Symbolic Design. He started his engineering career as an ASIC design engineer.