Validating Serial Protocols

When I was approached to talk about a new product from Arasan, I ran afoul of my favorite source of confusion from the Bureau of Arbitrary Definitions: I thought it was a verification story, when in fact it’s a validation story.

In case you think those two sound like pretty much the same thing, I always like to reinforce the confusion by defining verification as the act of proving that your design is a valid implementation of the design spec, while validation is the act of verifying that your design works properly in its desired setting. (Confused more? You’re welcome.)

Most of what we discuss in these pages is verification – making sure that there are no implementation deviations from the design spec. We spend much less time on validation, where the design is put into its native operating environment to see if it works as intended. This is often the domain of emulators, where you can connect in real system components or drive in real data traffic at speed to see how things work.

Arasan notes that this is becoming a problem with protocol stacks that communicate across gigabit (and higher) serial links – the emulators can’t keep up. They might be able to handle the physical layer (for instance, if there are FPGAs in the emulator, it’s likely they can handle serial data), but even so, the higher-level portions of the protocol stack will be emulated, meaning they can’t run at speed.

They say that the standard solution for this is to place a rate-matching unit between the emulator and whatever is being used to validate the design. But because the emulator is slow, you end up waiting a lot, and the resulting traffic certainly doesn’t reflect what a real traffic pattern would look like. In addition, apparently not all protocols have a rate-matching solution: MIPI, for example, can’t be handled that way.

So Arasan has released what is essentially a small tester that can be connected to a high-speed prototype board to test the design without an intervening rate matcher. It has a processor that handles the top levels of the protocol stack; the traffic is then shipped over to an FPGA that handles the bottom four levels of the stack and drives the design you’re testing directly.
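
To make that division of labor concrete, here is a minimal, hypothetical sketch in Python – not Arasan’s software, and all names and framing are invented – of how upper-layer protocol logic running on a processor might hand framed traffic down to a lower-layer engine implemented in an FPGA:

    # Hypothetical illustration only: class names, framing, and the hand-off
    # are invented for this sketch, not taken from Arasan's product.
    import struct

    class UpperLayers:
        """Software side: builds transaction-level packets for one protocol."""
        def build_packet(self, address: int, payload: bytes) -> bytes:
            # Invented framing: 4-byte address + 2-byte length, then payload.
            header = struct.pack(">IH", address, len(payload))
            return header + payload

    class FpgaLowerLayers:
        """Stand-in for the FPGA block that would add link/PHY framing and
        drive the serial pins of the design under test at full line rate."""
        def transmit(self, frame: bytes) -> None:
            # In real hardware this would be a DMA or memory-mapped write;
            # here we just mark where the software-to-FPGA hand-off happens.
            print(f"shipping {len(frame)} bytes to the lower-layer engine")

    # Traffic flows top-down: software builds the packet, the FPGA drives it out.
    stack_top = UpperLayers()
    stack_bottom = FpgaLowerLayers()
    stack_bottom.transmit(stack_top.build_packet(address=0x1000, payload=b"\x01\x02\x03"))

The point of the split is that only the slow, flexible upper layers stay in software, while everything that has to run at line rate lives in the FPGA next to the pins.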

They have different connectivity boards that can be swapped out for different protocols. Yes, in theory you could combine them, but they say it’s rare for a design team to need to validate more than one protocol: any given connection will carry only one.

You can find more information in their press release.
