
FPGAs for the Masses?

Freeing FPGA Implementation from the Hardware Designer's Grip

Over the years, there have been many attempts to make FPGAs easier to use, and most of them now occupy the footnotes of FPGA history. So when I got a note from Stéphane Monboisset introducing me to a new FPGA design tool called QuickPlay from a company called PLDA, I was about to send a polite, “Thanks, but no thanks,” when I remembered where I had last met Stéphane. It was when Xilinx was launching Zynq, and he was very successfully handling the European aspects of the launch, including the press conversations. The fact that he had moved from Xilinx to PLDA made me take it more seriously.

The context in which QuickPlay has been developed is that, while FPGAs can offer a great way of developing products and can often provide performance and other advantages over using, say, multicore processors and GPUs, they are now complex beasts. They need experienced hardware engineers to implement the intricate designs that provide the performance advantages, and projects can take a long time from initial concept to a debugged device that is ready for delivery. QuickPlay is intended to open up the development process to software people and improve the speed and quality of implementation.

It was, for me, a little difficult to get a grasp on exactly what QuickPlay is. Let’s start with what it is not. It is not a complete tool chain for FPGAs – it relies on the synthesis and place-and-route tools from the device manufacturer. It is not an all-purpose tool; it assumes that you will be developing a system based on one of a (wide) range of supported boards. But it is a way of developing systems around an FPGA that is accessible to people who are not hardware experts.

Stepping back a bit, QuickPlay already has an ecosystem in place – the QuickAlliance programme – comprising partners who supply pre-validated IP cores and libraries in VHDL, Verilog, and C; board partners; partners who provide tools that integrate into the QuickPlay design flow; and suppliers of design services.

The underlying model of the QuickPlay approach is based on how a software developer thinks – that a design is a number of functions that communicate with one another and with the outside world. For the hardware implementation, these functions are treated as kernels.

The design flow is to create a software-functional model, using C or C++ implementations of the kernels – taken from the IP library or built from scratch – and to verify the model using standard tools and methodologies. Once the model is clean, a hardware engine is generated for an FPGA platform, along with standard I/O.

Now, clearly, there is a great deal more going on under the hood, so let’s go back and look at the flow in a little more detail. The Eclipse-based QuickPlay IDE has a working space into which functions in C, C++, or even HDL can be dragged and dropped from libraries. These can come from PLDA or from the ecosystem, or they can be developed in-house. Streaming communication channels link the functions, and these can include links to memory blocks and system I/O. Parallelism is supported simply by duplicating functions.

When the system model is assembled, it can be compiled and then exercised with a test program to verify behaviour. For example, is the output from the system appropriate for the input? The IDE includes open source debugging and other tools.

Once the software model is performing according to specification, it is mapped onto an appropriate target board with an FPGA and physical interfaces. The FPGA is implemented using the manufacturer’s tools from within the IDE. The board can then be exercised with a wide range of tests, in parallel with the software model. Because QuickPlay guarantees functional equivalence between the software model and the hardware implementation, any bug in the hardware must also exist in the software. There is no need for hardware debugging: fixing the software and re-targeting the FPGA automatically fixes the hardware.

What are the benefits? We have seen that the developer doesn’t need to know much about what is happening below the level of the kernels. There is no need to worry about clock networks, timing, communication stacks and protocols, or the raft of other issues that normally absorb so much of a hardware engineer’s energy. The implementation is correct by construction, so there is no need for hardware-level verification and debugging.

The promise of creating hardware without hardware engineers has been a holy grail for some for many years. Every few years, there is an announcement that this has been achieved, only for the breakthrough to gradually fade away (Handel-C anyone?). With QuickPlay, we may be getting there, even if it is for only a subset of all possible applications. See for yourself in the selection of video demos at https://www.youtube.com/channel/UCM-Loe1CQ6iNgC32ndtWakA

13 thoughts on “FPGAs for the Masses?”

  1. “The promise of creating hardware without hardware engineers has been a holy grail for some for many years.”

    This is similar to the current holy grail of creating software without software engineers. Quality has dropped, deployment sizes have grown, and applications are systematically slower despite the big increase in computational power.

    So, honestly, I don’t think it’s the way to go, long-term. Time will tell.

    Just a small note regarding your title, “FPGAs for the Masses”. I was honestly expecting something different – a radical decrease in FPGA prices from some vendor, at least for the low-volume or hobbyist market. That would be the best news around – being able to buy a decent FPGA (similar to a Xilinx Spartan-6 or Altera Cyclone IV, or even newer families) for less than USD $4 a piece.

    Unfortunately, I think this is not going to happen any time soon. But you can buy a GHz ARM SoC chip for only a few cents… only sales volume explains the cost difference.
