Factors Driving Change in FPGA Debugging Techniques
The ability to reprogram an FPGA has been a key benefit during the functional debug of a hardware design. When a design is not working correctly, engineers have added "debug hooks" to observe its internal behavior, a practice dating back to the earliest CPLDs and FPGAs. Initially, internal FPGA signals that needed to be observed were brought out to pins, and external logic analyzers were used to capture the data. However, as design complexity and size have increased, this approach is no longer adequate for several reasons. First, as FPGA design capacity has increased, the number of pins has increased at a much slower rate. Consequently, the ratio of available logic to available I/Os has decreased over time, as shown in Figure 1. Moreover, the number of I/Os left free for debug purposes once the design is complete is often small or zero.
Second, design complexity now often requires many signals to be observed instead of just a few. A common technique is to implement wider internal buses to achieve high system throughput in larger FPGAs. If an internal 32-bit bus is suspected of having bad data, a few I/O pins are clearly insufficient to determine the problem.
Third, complex functions often need to be tested in-system, where physical access to the board, and hence to the FPGA's I/Os, may be limited. New package types further restrict physical access to the FPGA pins. System speed can also be an issue, because probe connections can degrade performance or introduce signal noise.
Finally, a major factor driving a change in the debugging of FPGAs is the availability of new tools that use internal or embedded logic analyzers.
As with all tools, the best results are obtained by using these tools for what they do best, rather than using them in the same way as previous tools. Both internal and external logic analyzers are constrained by resources, static parameters and dynamic parameters. This article compares these constraints for both types of tools and examines how best to utilize internal logic analyzers.
Limitations of External Logic Analyzers
External logic analyzers have been in use for decades. A significant benefit of the external logic analyzer is its ability to store a large volume of signal information or trace data. Configurations vary, but most external logic analyzers can store several megabytes of data. To use an external logic analyzer with an FPGA, the data must first be routed off-chip. This is done in one of two ways. The first is to directly route the signals to be observed to I/O pins. Depending on the FPGA package type, accessing the I/O pins can be difficult. Boards designed for debugging in this manner typically have connectors, such as a MICTOR connector, designed into the board that are connected to the FPGA. However, this method is very inefficient because an I/O is needed for each signal.
The second method is to insert a core that routes signals to the I/Os. The advantage of this method is that the core can be designed to multiplex the signals onto the I/O pins, allowing the pins to be shared. The limitation is that the signals must be captured in real time by the external logic analyzer, and multiplexing significantly decreases the fastest possible capture rate. For this reason, either a 2x or 4x multiplexing scheme is commonly used, meaning that 32 I/O pins can support 64 or 128 signals. This is a significant improvement, but still a limitation if, for example, values on a wide bus are being debugged. Once the signals are connected to the external logic analyzer, it can then be used to set up the triggering and data capture conditions.
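The pin-sharing idea behind a 2x multiplexing core can be sketched in a few lines of Python. This is an illustrative model only, not any vendor's actual core: each capture clock carries half of the observed signals, so 64 internal signals fit onto 32 pins at the cost of halving the effective per-signal capture rate.

```python
# Illustrative model of 2x time-multiplexing: 64 internal signals
# shared over 32 I/O pins. Names and structure are hypothetical.

def mux_2x(samples_a, samples_b):
    """Interleave two 32-bit sample streams onto one shared pin group.

    Each capture clock drives half the signals, so the external
    analyzer must sample at twice the internal capture rate.
    """
    interleaved = []
    for a, b in zip(samples_a, samples_b):
        interleaved.append(a)  # phase 0: first 32 signals
        interleaved.append(b)  # phase 1: second 32 signals
    return interleaved

def demux_2x(stream):
    """Reconstruct the two original streams on the analyzer side."""
    return stream[0::2], stream[1::2]

bus_lo = [0x11, 0x22, 0x33]   # first 32 internal signals, per clock
bus_hi = [0xAA, 0xBB, 0xCC]   # second 32 internal signals, per clock
pins = mux_2x(bus_lo, bus_hi)
assert demux_2x(pins) == (bus_lo, bus_hi)
```

A 4x scheme extends the same idea to four phases per capture clock, trading still more capture rate for still more signals per pin.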
At this point the external logic analyzer operates under a distinctive set of constraints: a limited number of signals, high-speed triggering logic and a large trace memory. Most logic analyzers use a state machine-type triggering mechanism. The user specifies a value to wait for on the signals, and then either captures the data or moves to another state to look for a different condition. The signals themselves are static, but the conditions are dynamic and can be changed at any time. This approach works well given the constraints. Since the number of signals is limited, the number of operations possible on the combination of the signals is reduced. But since trace memory is relatively large, it is common to find a trigger condition close to the desired observation point and then capture large amounts of data to find the problem.
Using an Internal Logic Analyzer
An internal logic analyzer performs the same debugging function for an FPGA as an external logic analyzer, but the constraints are completely different. An internal logic analyzer uses one or more logic analyzer cores that are embedded in the FPGA design. The designer uses a PC to set the trigger conditions in software, which normally talks to the FPGA via JTAG. Once the logic analyzer core captures the data, it is transferred back to the PC via JTAG and can then be viewed by the designer. The number of signals available is limited only by the complexity of the triggering logic and the size of the trace memory. Most implementations allow hundreds or thousands of signals to be observed.
Triggering logic resources are limited to the space in the FPGA not used by design logic, and trace memory is limited to the RAM in the FPGA not used by design logic. Some implementations require RAM for trace memory, while others allow either RAM or LUTs to be used. However, all implementations offer significantly less trace memory than external logic analyzers, usually on the order of several thousand bits compared to several million bits. Triggering and data capture can occur at the full speed of the design, since the signals do not need to be multiplexed off the FPGA. As with an external logic analyzer, the signals must be statically defined. Changing signals often requires the FPGA to be re-implemented, although some tools allow some or all of the connected signals to be changed with only an incremental route of the FPGA. Most implementations allow some or all of the trigger conditions to be changed dynamically during debug, although the complexity of the available triggers varies by tool. The combination of more available signals, significantly less available memory and different triggering options drives the need to use the internal logic analyzer differently in order to get the best results.
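The practical consequence of the memory gap is easy to quantify. The sketch below uses hypothetical numbers, a few spare embedded block RAMs versus a multi-megabyte external buffer, to show how capture depth scales with the number of observed signals.

```python
# Back-of-the-envelope capture depth for a given memory budget.
# All figures are hypothetical examples, not specific to any device.
# depth (samples) = available memory bits // observed signal width

def capture_depth(memory_bits, signal_count):
    return memory_bits // signal_count

signals = 128                        # a wide internal bus under debug
internal_bits = 18 * 1024 * 4        # e.g. four spare 18-kbit block RAMs
external_bits = 8 * 1024 * 1024 * 8  # e.g. an 8-MB external trace buffer

print(capture_depth(internal_bits, signals))   # 576 samples
print(capture_depth(external_bits, signals))   # 524288 samples
```

With only a few hundred samples of depth, an internal logic analyzer cannot afford a coarse trigger followed by a long scroll through the capture; the trigger itself must land precisely on the event of interest.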
One example of a complex debugging problem is looking for a particular pixel in an SMPTE HD-SDI video stream. In this case, it would be necessary to find the EAV (end of active video) sequence, then look for the line number containing the desired data, then look for the SAV (start of active video) sequence. Finally, the necessary number of words is counted to reach the desired pixel in the line (Figure 2).
To find this kind of data for debugging requires looking for a sequence of values, then a particular value, followed by an ending sequence and finally counting a number of clocks before capturing data. To understand how to do this, it is necessary to look at a particular implementation. Lattice’s Reveal hardware debugger uses trigger units and trigger expressions to determine a trigger point. A trigger unit is a comparator, while the trigger expression allows combinations of trigger units and sequences.
For this SDI example, three trigger units could be used to define the EAV and SAV sequences, another trigger unit for the line number, and finally a count statement for the wait before acquiring data. An example trigger setup is shown in Figure 3. This setup can be used to look for any desired line number and pixel, since the value for the line number trigger unit and the count can be changed dynamically.
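The trigger sequence for this example can be sketched in software. The model below is a simplified illustration, not Reveal's actual implementation: trigger units act as comparators on the 10-bit word stream, and the trigger expression sequences them (EAV preamble, line number, SAV preamble, then a word count). The 0x3FF/0x000/0x000 preamble follows SDI framing, but the flag values and the placement of the line number are simplified assumptions for illustration.

```python
# Simplified model of the SDI trigger sequence: comparators (trigger
# units) chained into a sequence with a final word count. The EAV/SAV
# flag words and line-number placement are illustrative assumptions.

EAV_FLAG = 0x274   # example XYZ word marking EAV (hypothetical value)
SAV_FLAG = 0x200   # example XYZ word marking SAV (hypothetical value)

def find_pixel(stream, want_line, want_word):
    """Scan a 10-bit word stream and return the index of the desired
    pixel on the desired line, or None if it is not found."""
    i = 0
    while i + 4 < len(stream):
        # Trigger units 1-3: match the 3FF/000/000 timing preamble
        if stream[i:i+3] == [0x3FF, 0x000, 0x000]:
            if stream[i + 3] == EAV_FLAG and stream[i + 4] == want_line:
                # Trigger unit 4 matched the line number; wait for SAV
                j = i + 5
                while j + 3 < len(stream):
                    if (stream[j:j+3] == [0x3FF, 0x000, 0x000]
                            and stream[j + 3] == SAV_FLAG):
                        # Count words after SAV to reach the pixel
                        return j + 4 + want_word
                    j += 1
        i += 1
    return None
```

In hardware, of course, nothing scans a buffer: each comparator evaluates the live signals every clock, and the trigger expression's sequencing logic and counter decide when trace capture begins. Because the line-number comparison value and the word count can be changed dynamically, the same implemented design can be retargeted at any pixel without re-running place and route.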
External logic analyzers will continue to be used because of their value in analyzing system-level functions. But using them for internal FPGA debugging requires connections to be designed onto the board, and the number of available signals can be limited. Internal logic analyzers provide significant freedom in the number of signals that can be observed, but face resource constraints in triggering logic and especially in trace memory. However, careful use of triggering options allows an internal logic analyzer to start capturing data at exactly the time needed, making the most of the available resources. In this example, the complex sequence needed to analyze a specific pixel (line and word) in an SDI video signal was broken down into simple trigger elements for increased efficiency. This example is just a brief look at the use and applications of internal logic analyzers. As FPGA design complexity increases, internal logic analyzers and similar tools will be of increasing value to designers for functional verification and debug.
February 12, 2008