
11 Reasons You Should NOT use an FPGA for a Design, and Four Reasons You Should

"I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." – Abraham Maslow

We write a lot about FPGAs here at EEJournal, with good reason, and you might get the impression that they’re the right solution to every design problem. They’re not.

Here’s a checklist to help keep you on the right path to a successful design when you’re considering an FPGA as a design alternative:

  1. If you just need to blink an LED, use something else. Often, your first introduction to FPGA design is the blinking-light project. In real life, some designs don’t need to do much more than blink an LED, but that’s not a proper job for an FPGA. You can buy microcontrollers for pennies that can accomplish the task with far less hassle. It was once fashionable to use a 555 timer IC to blink LEDs. That was back in the 1970s, just after Signetics announced the 555. In terms of part cost, it’s now cheaper to blink the LED with a microcontroller than with a 555.
  2. If a microcontroller can do the job, use a microcontroller. The advantage of a microcontroller is that it’s already designed and tested. The hardware is already known to work. So if there’s a microcontroller, any microcontroller, with the right hardware configuration for your project, then use that instead of an FPGA.
  3. If a Raspberry Pi can do the job, use a Raspberry Pi. The Raspberry Pi organization has done the embedded community a huge favor by turning out a series of very capable and very cheap processor boards, ranging from the Raspberry Pi Zero, which you can get for as little as $5, to the Raspberry Pi 4, which can cost as much as $75. At the high end, you’re getting an embedded board with a quad-core, 64-bit processor and myriad interfaces, including dual-band WiFi and a camera interface. There’s extensive community support for Raspberry Pi development as well. If you think a Raspberry Pi isn’t serious hardware, consider this: I recently had lunch with a friend whose company makes gas chromatography equipment. They use a Raspberry Pi 4 as the controller because it’s cheap, it does the job, and someone else has already designed and debugged the board. Be like my friend’s company, if you can.
  4. Don’t you have better uses for your time? Compiling a design for a large FPGA still takes hours, and achieving timing closure can take days or even weeks for a tricky, high-speed FPGA design. If you pick an existing microcontroller or ASSP, all of that design-closure work happened long ago.
  5. If power consumption is important, don’t use an FPGA. The FPGA vendors love to tout the low-power aspects of their parts. Careful! Ask them, “Compared to what?” Capable FPGAs, in general, are NOT low-power devices. Want proof? Take a look at the heat sinks on those FPGA boards. You don’t see many microcontrollers sporting heat sinks, or fan sinks for that matter. When FPGA vendors say their devices consume less power than CPUs or GPUs, they’re talking about devices that dissipate on the order of 100 watts. You want an actual low-power alternative? Choose something else.
  6. Don’t care about latency? If you don’t care whether your system’s latency is measured in nanoseconds or microseconds, then you don’t need an FPGA.
  7. If you don’t know Verilog or VHDL, don’t use an FPGA. Sure, the FPGA vendors are all trying to grow their market by creating bridge compilers that transform C and C++ code into Verilog or VHDL. These tools are like the automated translation tools that turn English into Hungarian or Urdu: it’s amazing that they work at all, but a lot of nuance is lost in the translation. With an FPGA, you need to think in parallel to get the full advantage of its massively parallel hardware architecture. If you need to perform thousands of multiply/add operations in one clock cycle, you can with an FPGA. However, C and C++ are not formulated to let you express parallel operations easily, because they’re designed to create object code for sequential machines – namely, microprocessors. (See the sketch after this list.)
  8. Want to write code in Python or some other slow-boat interpreted language? Don’t use an FPGA. Please don’t think you’re going to get any sort of bare-metal performance from an FPGA if you write your code in an interpreted language like Python. Interpreted languages are designed for ease of use, not for speed. Sure, Xilinx offers Pynq, a Python framework and family of boards for its FPGA SoCs. It’s a great learning tool. It’s just not a performance tool.
  9. If you’re pinching pennies, don’t use an FPGA. For some applications, performance rules over cost and power consumption. For other applications, pinching pennies is the prime goal. Including an FPGA on your bill of materials will not help to pinch pennies. In general, FPGAs cost a lot more than microcontrollers.
  10. If you don’t want a lot of power supplies on your board, don’t use an FPGA. For some strange reason, FPGAs need a lot of power supplies – for the core voltage, for I/O voltages, for memory and memory-backup power, and so on. If you look at an FPGA board, you’ll see a lot of on-board regulators to create all of these various voltages just to make the FPGA happy. Before it was bought by Intel, Altera actually bought a power-supply module company called Enpirion. That ought to tell you how important power supplies are to FPGAs. Enpirion makes very cool products, but power supplies are a means to an end for most design engineers and not the main design goal.
  11. If you know your design will go into high-volume manufacturing, don’t target an FPGA. High-volume products (think millions of units) are the domain of ASICs, or structured ASICs if you’re in some mid-volume gray area. It’s fine to prototype with FPGAs for such products, but you want to jump to a custom device as quickly as possible because, compared with an ASIC, an FPGA is off by an order of magnitude or more on the three “P”s: performance, power, and price.
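
A quick illustration of reason 7: here is a minimal Python sketch of the kind of multiply/add loop described above (the same point applies to C and C++); the function and data are made up purely for illustration. The language only lets you say “do these operations one after another,” and it is the HLS tool’s job to discover that the iterations are independent and can be spread across parallel DSP blocks, which is exactly the nuance that tends to get lost in translation.

```python
# A dot product, written the way software languages want it written:
# one multiply and one add per loop iteration, strictly in sequence.
def dot_product(a, b):
    acc = 0
    for x, y in zip(a, b):   # semantically, these iterations happen one at a time
        acc += x * y
    return acc

# On a CPU this executes as a sequence of multiply/add operations.
# On an FPGA, the same mathematics can be spread across hundreds or thousands
# of DSP slices so that every product is formed in the same clock cycle --
# but nothing in the loop above says that, which is why HLS tools need
# pragmas, unrolling hints, and a careful coding style to recover the parallelism.
if __name__ == "__main__":
    print(dot_product([1, 2, 3, 4], [5, 6, 7, 8]))  # prints 70
```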

If you’ve just run that gauntlet of reasons you should not use an FPGA and still think you should, then you probably should.

  1. If your computational performance requirements cannot be met by running software in a processor, then you should consider an FPGA as a design choice.
  2. If you need significant amounts of high-speed I/O in the form of Gigabit Ethernet or multiple multi-lane PCIe ports, then you should consider an FPGA as a design choice.
  3. If you need to perform significant amounts of high-speed DSP, FPGAs should be your first choice.
  4. If you already have proficiency in Verilog or VHDL, then you should not hesitate to consider FPGAs as a design choice.

Do you have any advice to add to these lists? If so, please feel free to dispense that advice in a comment below.

 

Postscript: So many experienced FPGA designers have weighed in on this article with special cases where these rules of thumb don’t apply that I am compelled to quote Picasso: “Learn the rules like a pro, so you can break them like an artist.”

Postscript #2: If time to market is the most important factor for your project, then an FPGA will get you to the finish line first. So will an off-the-shelf PCB assembly that’s already tested and debugged.

19 thoughts on “11 Reasons You Should NOT use an FPGA for a Design, and Four Reasons You Should”

  1. Positive reason number 5: You cannot buy a CPU, GPU or uP with the precise combination of I/O peripherals that your design requires. Thus you build your ideal I/O platform into which you pour your DSP, AI, Hard/Soft CPU sub-systems.

    Positive reason number 6: The standards and interfaces that your industry uses keep evolving at a pace no regular solution can keep up with. Only an FPGA can capture today’s system requirements with the capacity to adapt to tomorrow’s requirements when they appear…

  2. I did not really understand the last point, “If you know your design will go into high-volume manufacturing, don’t target an FPGA.” Is it possible to rephrase?

      1. that needs a low-latency DSP and needs to be easily reconfigurable to adjust to future protocol changes?

        (The UX on mobile is awful here; it’s very easy to fat-finger “post comment” while typing.)

    1. If you know you’ll be producing products in the millions, you should be thinking ASIC from the start, because the ASIC unit cost will be much lower than the FPGA’s. However, there’s a high NRE cost at the front end of the product’s life to design and fabricate that ASIC. You need to make the end product in high volumes to amortize the NRE; otherwise, the high NRE cost will overwhelm the lower unit cost of the ASIC. Many companies have targeted an FPGA knowing they’ll switch to an ASIC if they start shipping high volumes of end product. Intel offers a structured ASIC, an eASIC, for just such situations. The NRE cost of an eASIC is lower than that of an ASIC, but the unit cost of the eASIC is higher. It’s nice to have choices. Choose wisely.
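
      To put the amortization argument in numbers, here is a back-of-the-envelope sketch in Python. Every figure in it is a made-up placeholder rather than a real quote; plug in your own NRE and unit-cost numbers.

      ```python
      # Rough FPGA-vs-ASIC break-even sketch. All figures are hypothetical
      # placeholders -- substitute real quotes from your vendors.
      ASIC_NRE = 2_000_000.0   # one-time design/mask cost for the ASIC ($)
      ASIC_UNIT_COST = 8.0     # per-unit ASIC cost ($)
      FPGA_UNIT_COST = 45.0    # per-unit FPGA cost ($)

      # The ASIC pays off once the per-unit savings have covered the NRE.
      break_even_units = ASIC_NRE / (FPGA_UNIT_COST - ASIC_UNIT_COST)
      print(f"ASIC becomes cheaper beyond ~{break_even_units:,.0f} units")

      for units in (10_000, 100_000, 1_000_000):
          fpga_total = FPGA_UNIT_COST * units
          asic_total = ASIC_NRE + ASIC_UNIT_COST * units
          print(f"{units:>9,} units: FPGA ${fpga_total:>12,.0f}   ASIC ${asic_total:>12,.0f}")
      ```

      With these particular placeholder numbers the crossover lands somewhere in the tens of thousands of units; your own numbers will move it, which is the whole point of running the arithmetic before committing.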

  3. Not really fair on Pynq. It is built for FPGA SoCs and lets you talk to your AXI IP cores from Python using standard DMA drivers. You still write your cores in HDLs, or at least generate them with an HLS; there is no Python anywhere on the FPGA side of the SoC.

    Also, the cost point is not really true. Lattice makes a lot of very cheap parts that easily compete with MCUs on price.

    1. FPGAs are for performance. Python is not. The “P” in “Pynq” stands for “Python.” Pynq is a superlative learning tool for dipping your toe into FPGAs. That’s why it was created in the first place. It is not intended for bare-metal FPGA programming (although you can do that on a Pynq board if you wish). As a Pynq enthusiast, I would never be unfair to the product, nor will I pretend it is something that it is not.

      1. Python in Pynq is for gluing things together and for testing your IP cores interactively. I don’t see how it harms performance in those particular use cases (speaking as someone who would never use Python for anything at all, but still likes the design of Pynq).

        1. Hi, metaprog. Would you consider using C# and Visual Studio for “gluing things together and testing your IP”?
          Because you can define classes/objects at whatever level of detail you want, then compile and run in debug mode. I do it and it works great.
          There are also delegates that you can use to evaluate Boolean and arithmetic expressions, so you can model your IP at any level of detail you want. I believe it would not be too hard to define a soft “Pynq” to use for IP development.
          I will explore this further because I have been a logic designer for many years and think it was a terrible mistake when HDL/Verilog was chosen for design entry simply because it could be simulated.
          Verilog is a hardware description language, so the tool chain has to synthesize the (incomplete) design before waveforms can be created.
          I want to enter the dataflow and control logic and use an IDE and compiler to connect the blocks long before even thinking about synthesis. And I remember that a co-worker once said “real logic designers do not use FSMs.” Today FSMs are considered essential, because HDL is a description language rather than a design language.

          1. Python in Pynq is used not as a replacement for HDLs, but as an easy tool for talking to AXI IPs from the PS side of your SoC. It reads the .hwh files and infers from them how to access your IP cores’ registers. You still have to design those cores using HDLs or at least an HLS.
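
            For readers who have not used it, this is roughly what that looks like in practice. The sketch below is minimal and makes several assumptions: the bitstream name, the IP instance name, and the register offsets are all hypothetical and would come from your own Vivado/HLS design and its .hwh file.

            ```python
            # Minimal Pynq usage sketch (runs in Python on the ARM PS side of the SoC).
            # Bitstream name, IP instance name, and register offsets are hypothetical.
            from pynq import Overlay

            overlay = Overlay("my_design.bit")   # loads the bitstream and parses my_design.hwh
            print(overlay.ip_dict.keys())        # the IP instances Pynq discovered in the design

            accel = overlay.my_accel_0           # an AXI-Lite-mapped core, exposed as an attribute
            accel.write(0x10, 42)                # write an input register (offset per your HLS driver)
            accel.write(0x00, 1)                 # hypothetical "start" bit in the control register
            result = accel.read(0x18)            # read back the result register
            print("accelerator returned", result)
            ```

            All of the Python here is control-plane glue on the processor side; the core it talks to was still written in an HDL or generated by an HLS tool, which is exactly the commenter’s point.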

          2. metaprog wrote, “You still have to design those cores using HDLs or at least an HLS.”
            Indeed you do! And to me, that is a major problem.

            A system is a collection of interconnected blocks that communicate over interfaces that carry control signals and, usually, data that moves between blocks according to a particular protocol. That protocol usually involves “handshaking,” which includes choosing which connected block is to receive the input data, plus the control signals that ensure the data is successfully transferred.

            The handshaking signals (inputs/outputs) occur at random times relative to the clock signal, and the time interval from an output to the associated input is not defined. Meanwhile, constructs like the always block are triggered by the local clock signal. Yes, it is hard to define the sensitivity list, as well as the conditions at a particular time, while things are changing at random times.

            Also, HDLs do not handle inter-block connections for you. A module can be instantiated, but it must be connected manually.

            HDLs were created for synthesis, NOT DESIGN. So, the article’s opening quote, paraphrased: “If the only thing you have is an HDL, then everything should be synthesized.”

            The pitiful part is that there is a compiler and IDE that could be used to make things easier, though it would still not be a piece of cake. Logic design is still required.

    2. In China, you can buy microcontrollers for three cents or less per chip. Is that really what Lattice is getting for its least expensive FPGAs these days? If you’re comparing a high-end MCU with a low-end FPGA, then consider the functions you’re getting from each for the price. Finally, all rules of thumb have exceptions.

    1. The disappearance of a power-supply module is regrettable, but there are plenty of alternatives. That’s good, because FPGAs need plenty of power supplies.

    1. Those two statements do seem in conflict, Karl Stevens. “FPGA is off by an order of magnitude in performance” is with respect to an ASIC. FPGAs exhibit superior performance when compared with processors running software. That’s clearly stated in the parts of those sentences that you didn’t quote, but I hope that clears things up for you.

  4. Reason #8: Want to write code in Python? Go ahead … BUT Python is a programming language, and programs run on computers by definition. FPGAs do not run programs.

    EXCEPT that a design that runs programs can be implemented on an FPGA.
    There are embedded processors that run programs, but performance is poor, mainly because instructions and data sit in off-chip memory with long access times (load, add, store, branch, etc.).

    USE EMBEDDED MEMORY BLOCKS. Use separate memories for instructions and data for fast parallel access.

    If anyone is interested, I have an open-source project on GitHub. CEngine has a running demo, and I am working on a C# AST-based update. (I forgot the name, but my ID is KarlS.)

  5. Here is a PDF describing the work that led to Project Catapult, which put FPGAs in Microsoft’s data centers.

    It does not seem to fit the reasoning in this article, because the performance gain justifies the cost and the FPGA adds programmability, in spite of all the stated reasons not to use an FPGA.

    Importantly, the difficulty of using an HDL for design entry is a real problem.

    Here’s the link:

    https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-2008-130.pdf

    The irony is that the MS Roslyn compiler API can extract the control flow and expressions, which can then be used to generate HDL for the build. (That is what HDL is for, anyway.)
