GateRocket Blasts Off

FPGAs Verifying FPGAs

The system is both elegant and enigmatic.

When visitors see the RocketDrive sitting on your lab bench (particularly if it is plugged into the handsome show-floor-worthy box currently making the rounds at trade shows), your “cool factor” will definitely creep up a notch or two. When you use it to help you knock bugs out of your next FPGA design, you’ll most likely be pleased with your purchase. GateRocket’s RocketDrive is a useful tool for FPGA designers.

You have to be careful, though, not to think about it too hard.

You see, if you’ve been doing FPGA design for a while, you probably have first-hand experience with the history of FPGA debugging and “verification” methodologies.  If you’ve been reading here for a while, you also probably know why “verification” is in quotes.  In our world, verification is a process for vetting your design before it goes over the wall to manufacturing and tooling.  The very idea of verification is to make sure that everything is great and straight before crossing the one-way barrier from the domain of flexible, iterative design to the world of irreversible investment in expensive masks and physical inventory.  In “measure twice, cut once,” verification is the second “measure” – giving us peace of mind that all is well before we commit our concept to materials.

The whole point of FPGA design is that it doesn’t work that way.  FPGAs give us Mulligans for Life. We can change our hardware design anytime from the initial concept phase all the way through to working systems in the field.  Unlike ASIC and even board design, there is (theoretically) never a point where a well-used FPGA must have its bitstream set in stone.  

It would be nice, however, to know that our design works correctly before we ship it to customers – particularly in safety-critical applications, but also in employment-critical situations like the one that could occur when your boss figures out that your design error is the one that has the support switchboards lighting up like New York City at dusk.  So, regardless of our sloppy use of the term “verification,” we need the ability to make sure our design (or our design revision) works as expected before it leaves our sphere of influence.

In the early days of FPGA, we simply spun the design using our development board.  We made a change to our schematic, hit the place-and-route button (then fixed a couple of stupid mistakes, rebooted our laptop, hit the place-and-route button again), zipped up the bitstream and pushed it down to the development board (then sat looking confused for a while, plugged in the development board, pushed the bitstream again, cursed, rebooted our laptop, pushed the bitstream again…), and found out that our design did not work in hardware.  Fifteen minutes later, we actually connected the output signal to the output buffer, re-ran the process above (sans cursing), and found out that our design showed signs of life, but still did not work in hardware.  Twenty-three iterations of this process later – at the end of the second day – things seemed pretty good with our FPGA design, and we were ready to move on to the next step in our project.

FPGAs got bigger and more complicated, however, and we moved on to HDL-based design.  Now, we had to add synthesis to our process, and place-and-route times scooted upward despite the steady march of increased computing power.  Design iterations went from minutes to hours.  Our two-day debug scenario crept toward two weeks.  It began to be less obvious what was wrong when we powered up our development board and got stone-cold darkness.  We needed visibility. We needed insight.  We needed (sigh) to simulate.
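To make that long-loop pain concrete, here’s a back-of-the-envelope sketch in Python of what one such iteration costs. The stage names and per-stage durations are purely illustrative assumptions, not any vendor’s real flow – the point is only that every edit now had to pay the full toll before the board would tell you anything.

```python
# Illustrative only: invented stage names and made-up per-stage durations,
# meant to show how one HDL-era design iteration adds up, not any real tool flow.

STAGE_MINUTES = {
    "synthesis": 45,
    "place_and_route": 120,
    "timing_optimization": 60,
    "bitstream_generation": 10,
    "program_dev_board": 2,
}

def one_iteration(change: str) -> float:
    """Tally the wall-clock cost (in minutes) of pushing one edit through the whole flow."""
    total = 0.0
    print(f"Iterating on: {change}")
    for stage, minutes in STAGE_MINUTES.items():
        # In a real flow each stage launches a vendor tool; here we only add up the waiting.
        total += minutes
        print(f"  {stage:22s} ~{minutes:3d} min   (running total: {total:.0f} min)")
    return total

if __name__ == "__main__":
    per_loop = one_iteration("connect the output signal to the output buffer")
    loops = 23  # the same couple dozen tries from the schematic days, now at hours per lap
    print(f"{loops} loops like that: roughly {loops * per_loop / 60:.0f} hours of waiting.")
```

Even with generous assumptions, the arithmetic lands right where the paragraph above does: a debug effort that once fit in two days stretching toward two weeks.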

EDA vendors were happy to oblige.  Millions of our design budget dollars were directed toward the ModelSims and Aldecs of the world, and thousands of simulation licenses landed on our laptops. EDA marketing rhetoric spread like wildfire.  If we failed to abandon our old “burn-and-pray” method of catching FPGA bugs in hardware and refused to embrace the new age of software-based simulation splendor, we were labeled “logic-design luddites” and forced to drink our break-room coffee from the “old-timer’s” drip machine instead of the new robotic barista that whipped out double-skinny mochaccinos for all the hip engineers.   Debugging in actual FPGA hardware was lame.  Simulation was suave and sophisticated.  We tasted the Kool-Aid. We added more sugar. We tasted it again.  HDL simulation for FPGAs was sweet.

FPGAs trundled on, however, growing bigger and bigger.  Designs became so complicated that HDL simulation actually became more of a true necessity than PowerPoint panache.  Drop 100K LUTs’ worth of unproven logic into a development board and hit the reset button.  The chances that anything good will happen are near zero.  The chances that you can then guess what’s wrong – even smaller.  The visibility of HDL simulation was our best hope for getting the design right before we kicked off the overnight run of synthesis, place-and-route, timing optimization, more synthesis, more place-and-route, and finally blasting a candidate bitstream down to our board for the next real-hardware iteration.

We began to rely more on simulation, and our methods became more sophisticated.  For complex designs, we needed a robust testbench to put our designs through their paces.  Our debugging methods started to bear a strong resemblance to ASIC verification processes.  We’d alter our HDL code, load it into the simulation environment, and run off for coffee (the good kind) while our computing clusters cranked through bazillions of simulated clock cycles looking for flaws.  Most iterations now ran from HDL editor to simulator and back instead of looping through synthesis, place-and-route, and prototype programming.  Our transition from hardware debug to software-based virtuality was virtually complete.
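For anyone who never lived through this, here’s a toy example of the self-checking flavor of that loop – pure Python standing in for an HDL testbench, with an invented “design under test” and an equally invented reference model, since the article doesn’t describe any particular design.

```python
# Toy self-checking "testbench" in Python. The design (a 16-bit parity generator)
# and the reference model are both invented for illustration; real flows would do
# this in an HDL simulator against actual RTL.

import random

def dut_parity(word: int) -> int:
    """The stand-in design under test: XOR-reduce the 16 data bits."""
    p = 0
    for i in range(16):
        p ^= (word >> i) & 1
    return p

def reference_parity(word: int) -> int:
    """Golden reference model the testbench checks the DUT against."""
    return bin(word & 0xFFFF).count("1") & 1

def run_regression(num_vectors: int = 10_000, seed: int = 1) -> int:
    """Drive random stimulus through the DUT and count mismatches against the reference."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(num_vectors):
        word = rng.getrandbits(16)
        if dut_parity(word) != reference_parity(word):
            failures += 1
    return failures

if __name__ == "__main__":
    fails = run_regression()
    print("PASS" if fails == 0 else f"FAIL: {fails} mismatching vectors")
```

The appeal is the turnaround: when a run fails, you edit the HDL and re-run in minutes, with visibility into every signal, instead of waiting out another overnight synthesis and place-and-route pass.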

Until something else went wrong.

Today, high-end FPGAs are so large that software simulation is becoming unwieldy.  Running a reasonable vector set through a repeatable testbench takes a long time, even with significant computing power.  We’re back in the trap of long-loop iteration again…

GateRocket comes in with a back-to-the-future solution:  “Why don’t we accelerate our HDL simulation of large FPGA designs with FPGA-based hardware accelerators?”

Wait just a second here.  We’re using an FPGA to accelerate the software simulation of – itself?

Strange as it sounds, the solution makes a lot of sense.  We are not just falling back to the days of burn-and-pray.  The difference here is that the FPGA hardware (yes, RocketDrive does walk, quack, and smell a lot like a development board) is essentially under the control of your software simulation environment.  The testbench that you use for HDL simulation seamlessly passes to the FPGA in your RocketDrive, and you get the best of both worlds – hardware execution speed with software visibility, control, and iteration times.  GateRocket says the RocketDrive “plugs into a standard disk drive slot in your PC and is available in several configurations, each containing the largest FPGA in its respective device family from Xilinx or Altera.”
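GateRocket doesn’t publish the internals of that integration here, so treat the following as a conceptual sketch only – every class and method name below is invented – of the general shape of the idea: the testbench drives one DUT interface, and whether each clock cycle is evaluated by a software model or by an FPGA-resident copy of the design is just a swappable back end.

```python
# Conceptual sketch only. All names are invented to illustrate the idea of a testbench
# that can target either a software model or a hardware-resident copy of the design
# through one interface; this is NOT GateRocket's actual API.

from abc import ABC, abstractmethod

class DutBackend(ABC):
    """Anything that can evaluate the design for one clock cycle."""

    @abstractmethod
    def step(self, inputs: dict) -> dict:
        """Apply inputs, advance one clock, return the observed outputs."""

class SoftwareModel(DutBackend):
    """All-software back end: slow for big designs, but every internal signal is visible."""

    def __init__(self) -> None:
        self.count = 0  # toy design: an 8-bit counter with an enable

    def step(self, inputs: dict) -> dict:
        if inputs.get("enable", 0):
            self.count = (self.count + 1) & 0xFF
        return {"count": self.count}

class HardwareBackend(DutBackend):
    """Placeholder for a hardware-accelerated back end (the RocketDrive role). A real
    implementation would marshal inputs out to the FPGA and read outputs back; here it
    simply delegates to the software model so the sketch runs standalone."""

    def __init__(self) -> None:
        self._device = SoftwareModel()

    def step(self, inputs: dict) -> dict:
        return self._device.step(inputs)

def testbench(dut: DutBackend) -> None:
    """The same stimulus and checks drive either back end, which is the whole point."""
    for cycle in range(5):
        outputs = dut.step({"enable": 1})
        assert outputs["count"] == cycle + 1, f"mismatch at cycle {cycle}"
    print(f"{type(dut).__name__}: testbench passed")

if __name__ == "__main__":
    testbench(SoftwareModel())    # all-software iteration
    testbench(HardwareBackend())  # same testbench, hardware-style back end
```

The design choice worth noticing is that the testbench never changes: the visibility, control, and iteration style stay on the software side, while the heavy cycle-crunching moves into silicon.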

Despite our skepticism, RocketDrive is not just an overpriced development board.  The tight integration with your HDL simulation environment has significant utility if you depend on a structured, organized methodology for FPGA design verification.  You could probably accomplish something similar with a bunch of custom coding, an FPGA development board, and the services of embedded logic analyzers like Xilinx’s ChipScope or Altera’s SignalTap.  Of course, then you’d be re-developing GateRocket’s product from scratch.  Really, what’s the point of that?  Just buy it.
