
Using VMM, DPI, and TCL to Leverage Verification and Enable Early Testing, Emulation, and Validation

The Details

Let’s face it.  Some designers refuse to learn a new language.  Or, the prospect of learning object-oriented programming makes some people break out in hives.  Or, the old way is still just fine.  Or, there’s not enough time in the schedule to get everyone trained.  Or, there’s no budget for training.

Whatever the case may be, you find yourself at the beginning of a project, and there are not enough people to do the verification work needed.  Most designers have learned TCL scripting somewhere along the way, or are much more amenable to learning it, for some reason, than SystemVerilog.  In our case, we are a VHDL house, so the language wars made it easier to push TCL than SystemVerilog. 

Challenges that designers face include design integration and bring-up in simulation, on emulation platforms, or during silicon validation.  Design and verification groups alike face common pressures in the form of a lack of time, manpower, budget, or other crucial resources.

Design engineers write many tests during development to debug their designs prior to integration into the top-level system, but throw them away when it is time to integrate.  They spend a lot of time writing separate block-level testbenches and crafting tests to exercise specific features. Once everything works and is integrated into the system, many of these initial debugging tests are lost or become unusable, mostly because they are no longer maintained once the full-chip simulation is running. Just as assertions are best written while designers are designing their blocks, debug tests written at that same stage can be valuable for achieving initial coverage of major features.

Traditionally, designers rewrite their block-level tests for top-level integration. Verification engineers then rewrite the tests again for the verification environment. Finally, the tests are rewritten once more for emulation and silicon validation. At LSI, we thought that if we could come up with a way to reuse the same tests across each application, we would be able to save both time and manpower for many groups. Our solution was to replace the traditional VMM Test layer with a TCL interpreter to unify all of these tests, and to keep multiple teams from reinventing the wheel over and over again.

Using this approach, designers bypass block-level testing and design their blocks directly in the top-level system. They then use the TCL verification environment to write debug tests for their blocks. These same tests can be used by the verification team as a head start for coverage and as an executable spec for constrained random testing. When it comes time for emulation or silicon validation, the same tests can be run in the lab on a TCL interpreter that is layered on top of software drivers instead of on the simulation environment.

We found that using the DPI to integrate a TCL interpreter that serves as the VMM test layer enables design engineers who don’t know SystemVerilog to use the system-level verification environment very early in the process. The design engineers write directed tests to debug their initial designs using a process that doesn’t require them to learn a new language.  By using DPI, we can also share test data structures and tasks between VMM and TCL, which allows verification development to occur in conjunction with directed test development, with large amounts of shared infrastructure within the simulation environment. From the TCL test layer, constrained randomization from SystemVerilog can be called to fill DMA transactions, randomize packets, or do almost anything else that is needed.

Since our Device Under Test (DUT) is a PCI Express device that plugs into a server, the verification environment can be extended to the lab very smoothly. With proper specification of the TCL layer, directed TCL tests from simulation can be run directly on the hardware without the need to make any changes. 

Providing this ability to designers early in the design process allows them to write a large number of directed tests, which is what they naturally want to do.  Verification engineers are able to leverage designers’ setup and test procedures, which allows them to begin constrained random testing even sooner.  Design engineers achieve a seamless transition to lab testing, both with FPGA emulation and eventual silicon validation.  And, ultimately, this allows the entire ASIC team to save time and manpower by eliminating most of the test rewriting that occurs.

Figure 1 illustrates the verification environment that I created, consisting of TCL, C, and SystemVerilog portions that interface with Verilog/Vera verification IP. This last layer talks directly to the DUT, which was written in VHDL. The drawing illustrates the intercommunication between the languages and the blocks. The blocks are shown for illustration; their specific details are beyond the scope of this article.

Figure 1 – Testbench block diagram with languages used

Benefits of a TCL Test Layer

Early RTL Testing and Integration

It really doesn’t take much time to bring up a basic VMM framework with TCL layered on top of it.  All you really need to do is integrate the TCL interpreter and then export functions to access low-level transactors, like system interfaces, the Register Abstraction Layer, etc.  After that, the designers are set to be off and running.

In our case, the test environment design started around the same time as the RTL design, and the basic VMM/TCL testbench was available by the time the initial blocks were being integrated at the system level.  No verification tests were written at that point, but the testbench provided the functionality for designers to be able to write debug tests.  Block-level testing was no longer needed, and we saved integration time because testing was done directly in-system. 

Emulation

For emulation, the debug and directed tests provide a good set of baseline tests to verify that the FPGA place and route is good, the FPGAs are programmed correctly, and the lab setup is operational.  They also give you a good set of initialization functions to incorporate into emulation tests. 

Obviously, verification tests that use VMM-specific or SystemVerilog-specific features will not be available in the lab.  But things like coverage callbacks, which can be used to measure coverage during constrained random tests, can be bypassed in the lab version of the TCL interpreter so that the debug and directed tests still run.  Likewise, driver-specific calls needed in the lab can be bypassed on the simulation platform.
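One simple way to implement such a bypass is to compile the same command source for both platforms.  The sketch below compiles the forwarding call out of the lab build; the LAB_BUILD flag and the sv_cov_sample export are illustrative names, not the actual ones from our environment.

#include "tcl.h"

// Illustrative DPI export: a SystemVerilog function that samples
// our coverage groups.
extern void sv_cov_sample();

static int cov_sample(ClientData dummy, Tcl_Interp *interp,
                      int objc, Tcl_Obj *const objv[])
{
#ifndef LAB_BUILD
  sv_cov_sample();  // simulation: forward to SystemVerilog
#endif
  return TCL_OK;    // lab: the command is simply a no-op
}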

When a bug is found in emulation, an emulation script can often be written to allow the bug to be hit fairly quickly. The lab emulation script can then be moved to simulation, which gives much better visibility for tracing bugs.

If the lab debug and emulation environment is already TCL-based, there may already be a library of common functions that can be leveraged for simulation testing. The main thing to do in this situation is to come to an agreement between the simulation platform and the emulation platform on what commands are needed and what their syntax is, and to make sure everyone stays in sync on new features and commands.

Verification

The RTL engineers write debugging tests that exercise most of the major features of their blocks.  Later on, verification engineers can use these debugging tests as directed tests.  In addition, since the debugging tests are written on top of SystemVerilog/VMM, coverage callbacks, which are omitted in the lab version, can be inserted from the TCL tests into SystemVerilog coverage packages to get a baseline measurement of how much is left to verify, and in what areas.
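As a rough sketch of what the C side of such a hook can look like, the command below returns an overall coverage number to the TCL test.  The sv_get_coverage export is a hypothetical name for a SystemVerilog function that samples our coverage packages; the real command set is richer than this.

#include "tcl.h"

// Hypothetical DPI export: a SystemVerilog function that returns
// the overall functional coverage percentage.
extern double sv_get_coverage();

// TCL command "cov_get": returns the current coverage figure so
// a test script can report how much is left to verify.
static int cov_get(ClientData dummy, Tcl_Interp *interp,
                   int objc, Tcl_Obj *const objv[])
{
  Tcl_SetObjResult(interp, Tcl_NewDoubleObj(sv_get_coverage()));
  return TCL_OK;
}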

In addition to coverage callbacks, other basic functions can be exported to TCL, like the SystemVerilog “rand” function (for seed control) and VMM note/warning/error logging. This let us make a hybrid environment available that was still tightly coupled to much of what VMM has to offer, even for designers who wrote random tests for their blocks.

Since these debugging tests are written as the system is being integrated and brought up, we can also leverage the designers’ initialization and startup routines directly.  We can use their debugging tests almost as an executable spec for how to use their blocks when we start writing real verification tests. We can also select randomization constraints and control generators, checkers, monitors, and scoreboards from the TCL test layer.

Silicon Validation

If we run a common simulation and emulation TCL shell, by the time the silicon comes back, we should have a fairly extensive set of tests that were run in simulation. These tests can also be run in the lab against the real silicon on pretty much every major block.

Implementation in Testbench

Overview

The standard VMM environment flow proceeds through its stages as normal, as shown in Figure 2, with an added Preconfig stage that sets up the bus configuration and register space.  When the test reaches the VMM Start stage, it launches a TCL interpreter that executes a given TCL script as its test. 

Figure 2 – Test environment functional flow

To make it useful, we make simple, low-level functions such as bar0_read and bar0_write callable from the TCL script.  Transaction-level function calls seem to work best for enabling debug testing. To support basic verification practices, we make simple coverage callbacks available, and we redefine the TCL puts command to call vmm_note.  We also export two other basic functions, vmm_error and vmm_warning.
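The puts redefinition follows the same pattern as the exit replacement shown later: delete the built-in command and register a C function in its place.  Here is a minimal sketch, assuming a hypothetical exported SystemVerilog function named sv_vmm_note that forwards a message to the VMM logger:

#include "tcl.h"

// Assumed DPI export: a SystemVerilog function that logs a
// message through the VMM logger as a note.
extern void sv_vmm_note(const char *msg);

// Replacement body for the TCL "puts" command, so that script
// output lands in the simulation log with VMM formatting.
static int puts_to_vmm(ClientData dummy, Tcl_Interp *interp,
                       int objc, Tcl_Obj *const objv[])
{
  if (objc == 2)
    sv_vmm_note(Tcl_GetString(objv[1]));
  return TCL_OK;
}

// Registered in Tcl_AppInit just like the "exit" replacement:
//   Tcl_DeleteCommand(interp, "puts");
//   Tcl_CreateObjCommand(interp, "puts", puts_to_vmm,
//                        (ClientData)NULL, (Tcl_CmdDeleteProc *)NULL);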

To make it useful for more advanced verification, we also export constraint control functions and functions to control generators, checkers, monitors, and scoreboards.  None of these verification control functions can be duplicated in the lab, so they are bypassed in the TCL interpreter for the emulation environment.
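These control commands use the same wrapper pattern as the register accessors.  Here is a sketch of what a generator control command could look like, assuming hypothetical exported SystemVerilog functions sv_gen_start and sv_gen_stop:

#include <string.h>
#include "tcl.h"

// Hypothetical DPI exports: SystemVerilog functions that start
// and stop one of the environment's generators.
extern void sv_gen_start(int stream_id);
extern void sv_gen_stop(int stream_id);

// TCL command syntax:  gen_ctrl stream_id start|stop
static int gen_ctrl(ClientData dummy, Tcl_Interp *interp,
                    int objc, Tcl_Obj *const objv[])
{
  int stream_id;

  Tcl_GetIntFromObj(interp, objv[1], &stream_id);
  if (strcmp(Tcl_GetString(objv[2]), "start") == 0)
    sv_gen_start(stream_id);
  else
    sv_gen_stop(stream_id);
  return TCL_OK;
}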

Test environment flow

Starting from a bare-bones VMM environment, we add a TCL test layer.  When the VMM environment reaches the Start stage, a SystemVerilog task named start_tcl is called after all of the other transactors are started.  The start_tcl task calls an imported C function named sv_call_tcl, which launches the TCL interpreter.  As the TCL script executes, user-defined TCL procedures can be called that in turn execute exported SystemVerilog tasks.  These can be used for things like accessing DUT registers or configuring and kicking off generators and checkers. Once the end of the TCL script is reached, it triggers an event that tells the wait_for_end VMM environment phase that the test is done.

Starting the TCL interpreter

The VMM start task is as follows, with other transactors eliminated for clarity:

task pcie_rvm_env::start();
    super.start();

    fork
      test_top.tb.start_tcl();
    join_none
endtask

The start_tcl task looks like this:

program automatic stimulus(peripherals_if.Tb periph_ifc);

import "DPI" context task sv_call_tcl();
event tcl_done;

task start_tcl();
  fork
    sv_call_tcl();
    wait(tcl_done.triggered);
  join_any
 
  $display("Finished tcl test file.");
endtask

The sv_call_tcl function is written in C.  It simply calls the Tcl_Main function declared in the tcl.h header that is included with most standard TCL installations.  Tcl_Main does not return until the TCL test is finished.  The Tcl_AppInit argument to Tcl_Main is an initialization function that is described in the next section.  The sv_call_tcl function looks like this:

#include "svdpi.h"
#include "tcl.h"

// Our interpreter initialization routine, defined below.
extern int Tcl_AppInit(Tcl_Interp *interp);

void sv_call_tcl() {
  // argv[0] has to be the calling program's name.  We can make
  // this up.
  int a_argc = 1;
  char name[] = "simsh";
  char *a_argv[1];
  a_argv[0] = name;

  Tcl_Main(a_argc, a_argv, Tcl_AppInit);
}

Exiting the TCL interpreter

When the test is complete, the exit command is called from the script, which lets the VMM environment know we are finished so it can proceed through the rest of its stages.  In the TCL interpreter, the exit command is aliased to our exit_to_sim function, which calls sv_notify_done(), an exported SystemVerilog task that triggers the event that lets the VMM wait_for_end stage know we are finished.

The exit_to_sim function looks like this:

// sv_notify_done is the exported SystemVerilog task that triggers
// the tcl_done event; its C prototype comes from the tool-generated
// DPI header.
static int exit_to_sim(ClientData dummy,      // Not used
                       Tcl_Interp *interp,
                       int objc,
                       Tcl_Obj *const objv[])
{
  sv_notify_done();
  return TCL_OK;
}

Initializing the TCL interpreter

The function Tcl_AppInit is used by Tcl_Main to specify what TCL script to parse and to define custom commands in the interpreter for the script to use.  In this simple example, we create bar0_read and bar0_write commands, and then delete and redefine the TCL exit command to call the exit_to_sim function that we defined in the last section.

The Tcl_AppInit function looks like the following.  First, we register a couple of user-defined TCL commands, binding each new TCL command to a C function.  Then we delete the exit command and replace it with our own version that lets the VMM environment know we are done.  Finally, we execute a TCL script named input.tcl.

////////////////////////////////////////////////////////////////
//  Tcl_AppInit
////////////////////////////////////////////////////////////////

// Command handlers defined in the following sections.
static int bar0_read(ClientData, Tcl_Interp *, int, Tcl_Obj *const []);
static int bar0_write(ClientData, Tcl_Interp *, int, Tcl_Obj *const []);

extern int Tcl_AppInit(Tcl_Interp *interp)
{
  // ---- Register new TCL commands ----
  Tcl_CreateObjCommand(interp, "bar0_read", bar0_read,
                       (ClientData)NULL, (Tcl_CmdDeleteProc *)NULL);
  Tcl_CreateObjCommand(interp, "bar0_write", bar0_write,
                       (ClientData)NULL, (Tcl_CmdDeleteProc *)NULL);

  // ---- Delete and replace the "exit" TCL command ----
  Tcl_DeleteCommand(interp, "exit");
  Tcl_CreateObjCommand(interp, "exit", exit_to_sim,
                       (ClientData)NULL, (Tcl_CmdDeleteProc *)NULL);

  // ---- Register exit handlers ----
  // cpsCleanup is our cleanup routine (not shown).
  Tcl_CreateExitHandler((Tcl_ExitProc *)cpsCleanup, (ClientData)NULL);

  // ---- Execute the TCL script ----
  Tcl_EvalFile(interp, "input.tcl");

  return TCL_OK;
}

New commands are defined through the Tcl_CreateObjCommand routine.  The first argument is the handle to the interpreter that was created by Tcl_Main.  The second argument is the name of the TCL command that is being created, and the third argument is the C function that is called when the TCL command is used.

Argument passing from TCL commands to SystemVerilog

Shown below is an example of passing arguments between a TCL command and SystemVerilog, extending the bar0_read example started earlier.  The C function receives a byte offset from the TCL command and passes it to an exported SystemVerilog task, which performs the read and passes the read data back when the transaction is complete.  Here is the bar0_read C function that is called by the bar0_read TCL command defined in Tcl_AppInit:

//////////////////////////////////////////////////////////////////////////
// bar0_read
//   Syntax:
//   bar0_read byte_offset
//////////////////////////////////////////////////////////////////////////

// sv_bar0_read is the exported SystemVerilog task that performs the
// read; its C prototype comes from the tool-generated DPI header.
static int bar0_read(ClientData dummy,      // Not used
                     Tcl_Interp *interp,
                     int objc,
                     Tcl_Obj *const objv[])
{
  Tcl_Obj *objPtr;
  int Data, ByteOffset;

  // 1st argument - byte offset
  Tcl_GetIntFromObj(interp, objv[1], &ByteOffset);

  // SV PCI read routine.
  sv_bar0_read(ByteOffset, &Data);

  // Return data to the TCL caller
  objPtr = Tcl_NewIntObj(Data);
  Tcl_SetObjResult(interp, objPtr);
  return TCL_OK;
}

The function call sv_bar0_read is an exported SystemVerilog task that performs the transaction and passes back the read data.  The result is then handed back to the TCL interpreter using the Tcl_SetObjResult function, which is provided by the TCL library.
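For completeness, here is a sketch of what the companion bar0_write command could look like.  Data moves in the opposite direction, so there is nothing to return; as with the read, the sv_bar0_write prototype is assumed to come from the tool-generated DPI header.

//////////////////////////////////////////////////////////////////////////
// bar0_write
//   Syntax:
//   bar0_write byte_offset data
//////////////////////////////////////////////////////////////////////////
static int bar0_write(ClientData dummy,      // Not used
                      Tcl_Interp *interp,
                      int objc,
                      Tcl_Obj *const objv[])
{
  int ByteOffset, Data;

  // 1st argument - byte offset; 2nd argument - write data
  Tcl_GetIntFromObj(interp, objv[1], &ByteOffset);
  Tcl_GetIntFromObj(interp, objv[2], &Data);

  // SV PCI write routine.
  sv_bar0_write(ByteOffset, Data);

  return TCL_OK;
}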

Real-world Results and Lessons Learned

At the start of this project, I was just learning SystemVerilog and VMM, so my learning curve was definitely longer than typical.  It took around six weeks to develop the basic VMM environment and low-level transactors, including integrating a PCI Express Verification IP (VIP) block from Synopsys.  It then took another four weeks or so to add the TCL interpreter.

I am implementing the same TCL environment for our next chip, and it has taken only a couple of weeks to get to the point where the basic VMM/TCL environment is up and running and design engineers can start using it to test new features.

I did spend more time writing and cleaning up TCL code than I had originally hoped. In the end, however, it paid off to make common functions generic and to organize them into libraries.  That ultimately saved a lot of time in user support and improved test quality.  User support time also decreased as the environment matured and users became more familiar with it.

An unanticipated benefit appeared when we added support for a HyperTransport system interface, which replaces PCI Express in some products. I was able to swap out the low-level SystemVerilog transactor, and the TCL scripts were decoupled enough from the low-level functionality that no changes were needed for them to work in either mode.  The VMM layered model made this very easy.

In order to keep the TCL scripts compatible with both the simulation and lab environments, the engineers in charge of maintaining the simulation TCL shell and the lab TCL shell need to work together on existing commands and on implementing new commands and features.  Even for features that are impossible to implement in the lab, like constrained randomization, the related commands in the lab environment should call an empty function that prints a warning message, as sketched below.
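A minimal sketch of such a stub for the lab shell; the function name is illustrative, and the real shell would register it under each unsupported command name:

#include <stdio.h>
#include "tcl.h"

// Lab-side stand-in for a simulation-only command.  Shared test
// scripts still parse and run; the command just prints a warning.
static int unsupported_cmd(ClientData dummy, Tcl_Interp *interp,
                           int objc, Tcl_Obj *const objv[])
{
  printf("WARNING: '%s' is not supported in the lab; ignored.\n",
         Tcl_GetString(objv[0]));
  return TCL_OK;
}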

Conclusions and Recommendations

Integrating a TCL layer is definitely an involved process and should be specified carefully.  However, there are several tangible benefits.  Designers save time writing debug tests in two ways: first, they don’t have to create block-level testbenches from scratch; second, they can share common low-level functions for things like initialization, register programming, and DMA control.  The approach lets us leverage the myriad debug tests written by design engineers to get a head start on functional coverage.  It also enables commonality between simulation and the lab environment, for both emulation and validation.  When all is said and done, it saved us a tremendous amount of time, not just in verification, but in design debug and in eventual silicon validation.

We have a solid simulation platform that the designers like using and that we are leveraging for our next design.  We have extended it from just a PCI Express testbench to using HyperTransport, and more recently to PCI-X, and it seems just as flexible and maintainable as a regular SystemVerilog/VMM verification environment. 

We are also extending this environment to perform hardware/software co-simulation by leveraging the same SystemVerilog tasks we exported over the DPI and interfacing to our software drivers instead of the TCL interpreter.
