
Verifying Today’s SoCs Requires a New Approach

The system-on-chip (SoC) verification problem is well known to grow faster than design size, so verifying a complete SoC takes disproportionately more time and effort than verifying an individual IP block. However, the problems with SoC verification run deeper than the increase in size alone.

The biggest new wrinkle introduced by today’s large multicore SoCs is the greater number of shared resources, sometimes called “points of convergence” by verification engineers. Every level of the bus structure is hammered by multiple master agents vying for access. Every memory is accessed by multiple processors, processing engines, and peripherals, with complex rules to protect regions from corruption. The SoC usually has multiple high-performance interfaces, likely using a mix of standard and proprietary protocols, and the activity on these interfaces is intimately tied to the processors. All of this presents a major challenge for traditional verification.

Many functional verification tools and methodologies are well proven at the level of an IP component or a functional unit within the SoC. Formal analysis works well for IP and smaller units; the Universal Verification Methodology (UVM) provides a solution for simulation of large, complex functional units. However, UVM is not widely used for full-SoC simulation since it does not coordinate with the software running on the embedded processors. Clearly, a new and better approach is required.

The illustration below shows the concept behind this new approach, based on the automatic generation of self-verifying C test cases that run on the embedded processors. Given the central role that processors play in production operation of the SoC, it is not surprising that leveraging these same processors yields more efficient and more effective verification. Generated test cases perform the same sorts of operations that the SoC would perform in production use, including moving data to and from memory and the various processing engines.
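To make this concrete, here is a minimal sketch of what one self-verifying operation in a generated test case might look like. The copy engine and its register addresses are hypothetical stand-ins for illustration, not actual generated output:

```c
#include <stdint.h>

/* Hypothetical memory-mapped copy engine -- addresses are illustrative only. */
#define ENGINE_SRC   (*(volatile uint32_t *)0x40000000u)
#define ENGINE_DST   (*(volatile uint32_t *)0x40000004u)
#define ENGINE_LEN   (*(volatile uint32_t *)0x40000008u)
#define ENGINE_GO    (*(volatile uint32_t *)0x4000000Cu)
#define ENGINE_DONE  (*(volatile uint32_t *)0x40000010u)

#define BUF_WORDS 64u

static uint32_t src_buf[BUF_WORDS];
static uint32_t dst_buf[BUF_WORDS];

/* One self-verifying operation: move data through the engine, then check it. */
int test_copy_engine(uint32_t seed)
{
    for (uint32_t i = 0; i < BUF_WORDS; i++)
        src_buf[i] = seed ^ (i * 0x9E3779B9u);   /* known data pattern */

    ENGINE_SRC = (uint32_t)(uintptr_t)src_buf;
    ENGINE_DST = (uint32_t)(uintptr_t)dst_buf;
    ENGINE_LEN = BUF_WORDS;
    ENGINE_GO  = 1;
    while (ENGINE_DONE == 0)
        ;                                        /* poll for completion */

    for (uint32_t i = 0; i < BUF_WORDS; i++)
        if (dst_buf[i] != src_buf[i])
            return -1;                           /* self-check failed */
    return 0;
}
```

Because the test computes its own expected results, a failure is detected on the processor itself rather than by an external checker.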

[Illustration: The new approach to SoC verification is based on the automatic generation of self-verifying C test cases that run on embedded processors.]

The key is that the automatically generated C test cases stress-test the design in ways that would be unlikely to occur with production code. Multiple threads on each of the embedded processors pound on the bus, the memories, the interfaces, and other shared resources. Types of operations, data values, assigned memory regions, and other aspects of SoC behavior are randomized. This approach is effective at finding corner-case bugs in the design, and it also provides valuable performance metrics such as operation latencies and bus utilization.
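As a rough sketch of the idea, each core might run a randomized loop like the one below, with a per-core seed so runs are reproducible but distinct. The region addresses and the core-ID parameter are assumptions for illustration:

```c
#include <stdint.h>

/* Hypothetical per-core scratch regions; base address and size are illustrative. */
#define REGION_BASE  0x80000000u
#define REGION_SIZE  0x00010000u   /* 64 KB per core */
#define NUM_OPS      1000u

/* Small LCG so each core gets a reproducible but distinct random stream. */
static uint32_t lcg_next(uint32_t *state)
{
    *state = *state * 1664525u + 1013904223u;
    return *state;
}

/* Stress loop run by each core (core_id would come from a CPU ID register). */
int stress_shared_memory(uint32_t core_id)
{
    uint32_t rng = 0xC0FFEE00u ^ core_id;  /* per-core seed */
    volatile uint32_t *region =
        (volatile uint32_t *)(uintptr_t)(REGION_BASE + core_id * REGION_SIZE);

    for (uint32_t op = 0; op < NUM_OPS; op++) {
        uint32_t idx  = lcg_next(&rng) % (REGION_SIZE / sizeof(uint32_t));
        uint32_t data = lcg_next(&rng);

        region[idx] = data;           /* write through the shared bus */
        if (region[idx] != data)      /* immediate read-back self-check */
            return -1;
    }
    return 0;
}
```

With several such loops running concurrently across cores and threads, the shared bus fabric and memory controllers see far more contention than typical production code would generate.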

Since some of the operations performed by the embedded processors read and write data over external interfaces, the generated C tests coordinate with the testbench so that required input data can be provided and the results of these operations can be checked. Existing testbench components, including those compliant with UVM, are leveraged to the fullest extent possible. However, this new approach is not just an add-on to UVM, but a fundamentally different way of addressing full-chip verification.
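One plausible way to picture this coordination is a memory-mapped mailbox that the C test and a testbench component both watch; the mailbox layout, addresses, and command code below are all assumptions for the sake of the sketch:

```c
#include <stdint.h>

/* Hypothetical mailbox shared with the testbench; layout is illustrative. */
#define MBOX_CMD    (*(volatile uint32_t *)0x50000000u)
#define MBOX_ARG    (*(volatile uint32_t *)0x50000004u)
#define MBOX_STATUS (*(volatile uint32_t *)0x50000008u)
#define MBOX_DATA   (*(volatile uint32_t *)0x5000000Cu)

#define CMD_SEND_PACKET 1u   /* ask the testbench to drive an input packet */
#define STATUS_DONE     1u

/* Request stimulus on an external interface, then check what the SoC saw.
 * A UVM component on the testbench side would watch MBOX_CMD, drive the
 * interface, and report completion back through MBOX_STATUS. */
int test_interface_rx(uint32_t payload)
{
    MBOX_ARG = payload;
    MBOX_CMD = CMD_SEND_PACKET;          /* handshake with the testbench */
    while (MBOX_STATUS != STATUS_DONE)
        ;                                /* wait for the packet to arrive */

    uint32_t received = MBOX_DATA;       /* what the DUT actually received */
    return (received == payload) ? 0 : -1;
}
```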

The process of test case generation is driven by scenario models that describe the intended operation of the SoC. Such scenario models promote productivity if they use an intuitive format and are visualized in a way that looks much like diagrams created by design and verification engineers to describe chip functionality. These models are created by “beginning with the end in mind” and working backwards from desired results to the setup conditions in the SoC needed to produce those results.
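As a loose illustration of "beginning with the end in mind," a scenario model can be thought of as a graph of actions, each with the setup steps it depends on; generation walks backwards from the desired result and emits the setup first. The action names and four-entry prerequisite array below are invented for this sketch, not the actual model format:

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical scenario node: an action plus what must happen before it. */
typedef struct action {
    const char          *name;
    const struct action *prereqs[4];   /* setup steps this action depends on */
} action_t;

/* Tiny scenario fragment: checking a result requires a completed DMA,
 * which in turn requires the engine to be configured and fed with data. */
static const action_t config = { "configure_dma_engine", { NULL } };
static const action_t fill   = { "fill_source_buffer",   { NULL } };
static const action_t dma    = { "run_dma_transfer",     { &config, &fill, NULL } };
static const action_t check  = { "check_destination",    { &dma, NULL } };

/* Walk backwards from the desired result, emitting setup steps first. */
static void schedule(const action_t *a)
{
    for (size_t i = 0; a->prereqs[i] != NULL; i++)
        schedule(a->prereqs[i]);
    printf("%s\n", a->name);           /* a generator would emit a C test step */
}

int main(void)
{
    schedule(&check);                  /* "begin with the end in mind" */
    return 0;
}
```

Starting from the outcome (check_destination), the walk emits configure_dma_engine, fill_source_buffer, and run_dma_transfer before the check, which is exactly the setup-before-result ordering the article describes.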

The scenario models, provided by the verification team, describe a hierarchy of functionality, from basic operations up to system-level scenarios such as turning power domains off and on in a low-power design. Finally, the visualization of the scenario models includes coverage results that are entirely complementary to existing code and functional coverage metrics.

No other approach can provide the same level of verification for SoCs, or even for suitable IP blocks. Testbenches alone are insufficient since they do not leverage the embedded processors. Writing C tests by hand is much too difficult: humans have a hard time visualizing and reasoning about multiple threads on multiple processors. Running production code in simulation or hardware acceleration is important for other reasons but is not effective at stressing corner-case conditions.

The size, complexity, and large number of shared resources in a contemporary SoC demand a new and innovative approach to verification. Leveraging scenario models to automatically generate self-verifying C test cases with connections to the testbench exercises both regular production operation and corner cases. As more and more chips cross the SoC threshold, this approach will become the preferred method for verification.


About Thomas Anderson

Thomas L. Anderson is vice president of Marketing for Breker Verification Systems. His previous positions include Product Management Group director of Advanced Verification Systems at Cadence, director of Technical Marketing in the Verification Group at Synopsys, vice president of Applications Engineering at 0-In Design Automation, and vice president of Engineering at Virtual Chips. Anderson has presented more than 100 conference talks and published more than 150 papers and technical articles on such topics as advanced verification, formal analysis, SystemVerilog and design reuse. He holds a Master of Science degree in Electrical Engineering and Computer Science from MIT and a Bachelor of Science degree in Computer Systems Engineering from the University of Massachusetts at Amherst.

