FPGAs for the Masses?

Freeing FPGA Implementation from the Hardware Designer's Grip

Over the years, there have been many attempts to make FPGAs easier to use, and most of them now occupy the footnotes of FPGA history. So when I got a note from Stéphane Monboisset introducing me to a new FPGA design tool called QuickPlay from a company called PLDA, I was about to send a polite "Thanks, but no thanks," when I remembered where I had last met Stéphane. It was when Xilinx was launching Zynq, and he was very successfully handling the European aspects of the launch, including the press conversations. The fact that he had moved from Xilinx to PLDA made me take it more seriously.

The context in which QuickPlay has been developed is that, while FPGAs can offer a great way of developing products and can often provide performance and other advantages over using, say, multicore processors and GPUs, they are now complex beasts. They need experienced hardware engineers to implement the intricate designs that provide the performance advantages, and projects can take a long time from initial concept to a debugged device that is ready for delivery. QuickPlay is intended to open up the development process to software people and improve the speed and quality of implementation.

It was, for me, a little difficult to get a grasp on exactly what QuickPlay is. Let’s start with what it is not. It is not a complete tool chain for FPGAs – it relies on the synthesis and place-and-route tools from the device manufacturer. It is not an all-purpose tool; it presumes that you will be developing a system based on one of a (wide) range of boards. But it is a way of developing systems around an FPGA that is accessible to people who are not hardware experts.

Stepping back a bit, QuickPlay already has an ecosystem in place – the QuickAlliance programme – comprising partners who supply pre-validated IP cores and libraries in VHDL, Verilog, and C; board partners; tool partners whose products integrate into the QuickPlay design flow; and suppliers of design services.

The underlying model of the QuickPlay approach is based on how a software developer thinks – that a design is a number of functions that communicate with one another and with the outside world. For the hardware implementation, these functions are treated as kernels.
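
To make this concrete, here is a minimal sketch, in plain C, of what a kernel looks like from the software developer's point of view: a function that consumes an input stream and produces an output stream. The stream_t type and its helpers are hypothetical stand-ins for illustration, not QuickPlay's actual API.

```c
/* Hypothetical sketch of the kernel model: a kernel is a plain C
 * function that reads items from an input stream and writes results
 * to an output stream. stream_t and its helpers are invented for
 * this example; they are not the QuickPlay API. */
#include <stdio.h>

typedef struct {
    int *data;   /* backing buffer */
    size_t len;  /* number of valid items */
    size_t pos;  /* read cursor */
} stream_t;

/* Returns 1 and stores the next item in *out while data remains. */
static int stream_read(stream_t *s, int *out) {
    if (s->pos >= s->len) return 0;
    *out = s->data[s->pos++];
    return 1;
}

/* Appends one item; no bounds checking, for brevity. */
static void stream_write(stream_t *s, int value) {
    s->data[s->len++] = value;
}

/* A kernel: scales every input sample by a constant gain. */
static void scale_kernel(stream_t *in, stream_t *out, int gain) {
    int sample;
    while (stream_read(in, &sample))
        stream_write(out, sample * gain);
}

int main(void) {
    int in_buf[4] = {1, 2, 3, 4};
    int out_buf[4];
    stream_t in = { in_buf, 4, 0 };
    stream_t out = { out_buf, 0, 0 };

    scale_kernel(&in, &out, 10);
    for (size_t i = 0; i < out.len; i++)
        printf("%d\n", out.data[i]);  /* 10 20 30 40 */
    return 0;
}
```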

The design flow is to create a software-functional model, using C or C++ implementations of the kernels from the IP library or building them from scratch, and to verify the model using standard tools and methodologies. Once the model is clean, a hardware engine is generated for an FPGA platform, along with standard I/O.

Now, clearly, there is a great deal more going on under the hood. Let’s go back and look at the flow in a little more detail. The Eclipse-based QuickPlay IDE has a working space for drag and drop from libraries of functions in C, C++, or even HDL. These can come from PLDA or from the ecosystem, or they can be developed in-house. Streaming communication channels link the functions, and these can include links to memory blocks and system I/O. Parallelism is supported by merely duplicating functions, as sketched below.
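
A rough sketch of what the canvas assembles, again in plain C with invented names: two kernel stages linked by a channel (modelled here as an in-memory buffer), with the first stage duplicated to process the stream in parallel. On the target, the channels would become hardware streams.

```c
/* Hypothetical sketch of chaining kernels through streaming channels
 * and duplicating a kernel for parallelism. Channels are modelled as
 * plain buffers; in hardware they would be streaming links. */
#include <stdio.h>

#define N 8

/* Stage 1 kernel: doubles each sample. */
static void double_kernel(const int *in, int *out, int n) {
    for (int i = 0; i < n; i++) out[i] = in[i] * 2;
}

/* Stage 2 kernel: accumulates the stream into a single result. */
static int sum_kernel(const int *in, int n) {
    int acc = 0;
    for (int i = 0; i < n; i++) acc += in[i];
    return acc;
}

int main(void) {
    int src[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    int mid[N]; /* the channel between the two stages */

    /* Parallelism by duplication: two instances of double_kernel,
     * each handling half of the stream, just as dropping a second
     * copy of the function onto the canvas would. */
    double_kernel(src,         mid,         N / 2);
    double_kernel(src + N / 2, mid + N / 2, N / 2);

    printf("sum = %d\n", sum_kernel(mid, N)); /* prints sum = 72 */
    return 0;
}
```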

When the system model is assembled, it can be compiled and then exercised with a test program to verify behaviour. For example, is the output from the system appropriate for the input? The IDE includes open source debugging and other tools.
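
For instance, a test program might drive a known stimulus through the model and compare the result against a golden reference, along these lines (a hypothetical sketch, not QuickPlay's own test framework):

```c
/* Hypothetical functional test for the software model: feed a known
 * stimulus through a kernel and check it against a golden reference. */
#include <assert.h>
#include <stdio.h>

/* The kernel under test: adds a fixed offset to each sample. */
static void offset_kernel(const int *in, int *out, int n, int offset) {
    for (int i = 0; i < n; i++) out[i] = in[i] + offset;
}

int main(void) {
    const int stimulus[3] = {10, 20, 30};
    const int expected[3] = {15, 25, 35};
    int actual[3];

    offset_kernel(stimulus, actual, 3, 5);
    for (int i = 0; i < 3; i++)
        assert(actual[i] == expected[i]);

    puts("model output matches golden reference");
    return 0;
}
```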

Once the software model is performing according to specification, it is mapped onto an appropriate target board with an FPGA and physical interfaces. The FPGA is implemented using the manufacturer’s tools within the IDE. The board can then be exercised with a wide range of tests, in parallel with the software model. Because QuickPlay guarantees functional equivalence between the software model and the hardware implementation, any bugs found in the hardware must also exist in the software. There is no need for hardware debugging: fixing the software and re-targeting the FPGA automatically fixes the hardware.

What are the benefits? We have seen that the developer doesn’t need to know what is happening much below the level of the kernels. There is no need to worry about clock networks, timing, communication stacks and protocols, or a raft of other issues that normally absorb a significant amount of a hardware engineer’s energy. The implementation is correct by construction, so there is no need for hardware-level verification and debugging.

The promise of creating hardware without hardware engineers has been a holy grail for some for many years. Every few years, there is an announcement that this has been achieved, only for the breakthrough to gradually fade away (Handel-C anyone?). With QuickPlay, we may be getting there, even if it is for only a subset of all possible applications. See for yourself in the selection of video demos at https://www.youtube.com/channel/UCM-Loe1CQ6iNgC32ndtWakA

13 thoughts on “FPGAs for the Masses?”

  1. “The promise of creating hardware without hardware engineers has been a holy grail for some for many years.”

    This is similar to the current holy grail of creating software without software engineers. Quality has dropped, deployment sizes have grown, and applications are systematically slower despite the big increase in computational power.

    So, honestly, I don’t think it’s the way to go long-term. The future will tell.

    Just a small note regarding your title, “FPGAs for the Masses”: I was honestly expecting something different – a radical decrease in FPGA prices from some vendor, at least for the low-volume or hobbyist market. That would be the best news around – being able to buy a decent FPGA (similar to a Xilinx Spartan-6 or Altera Cyclone IV, or even newer families) for less than USD $4 apiece.

    Unfortunately, I think this is not going to happen any time soon. But you can buy a GHz ARM SoC chip for only a few cents; only sales volume explains the cost difference.
