
Power Puzzle

FPGA Power Doesn't Work Like You Think

We all learned about power in EE101, right? You know, voltage times current – bam, we’re done. Easy as Ohm’s Law. Of course that was just for DC, and we had to learn how to do all that RMS stuff for signals that wiggle. Still, we left engineering school having a pretty good feeling that we had this whole power thing under control.

Fast forward a few years and we’re in industry, designing digital circuits. We learn that we can make some broad assumptions about power, and those hold true for most everything we design. We learn things like: lower supply voltages mean lower power, and the more toggles we do, the more power we burn. If we turn off parts of our circuit when we don’t need them, we use less power. It’s the same lesson our Mom taught us about turning out the lights at home.

We applied those lessons to do things like gating our clocks, making circuits wider and lower-frequency instead of narrower and higher-frequency, sizing our transistors to be appropriate for the job they have to do, and minimizing IO activity and memory accesses. Our bag of tricks got pretty robust. We could squeeze the extra power out of just about anything we designed.
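All of those tricks trace back to the familiar CMOS dynamic power relation, roughly P ≈ α·C·V²·f. Here’s a minimal sketch of how the knobs interact – the capacitance, activity factor, voltage, and frequency numbers below are invented for illustration, not taken from any device:

```python
# Illustrative sketch of the classic CMOS dynamic power relation:
#   P_dyn ~= alpha * C * V^2 * f
# Every number below is made up for illustration, not from any datasheet.

def dynamic_power(alpha, c_switched_farads, vdd_volts, freq_hz):
    """Estimate dynamic power from toggle activity, switched capacitance,
    supply voltage, and clock frequency."""
    return alpha * c_switched_farads * vdd_volts**2 * freq_hz

baseline = dynamic_power(alpha=0.15, c_switched_farads=2e-9, vdd_volts=1.2, freq_hz=200e6)

# The old bag of tricks maps directly onto the terms:
# - clock gating and fewer toggles lower alpha
# - a lower supply voltage helps quadratically
# - a wider, slower datapath trades f for area (and may allow a lower V)
gated = dynamic_power(alpha=0.08, c_switched_farads=2e-9, vdd_volts=1.0, freq_hz=200e6)

print(f"baseline: {baseline*1e3:.1f} mW, gated + lower Vdd: {gated*1e3:.1f} mW")
```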

Then, we got our first FPGA. Weird. This thing doesn’t work at all like we expected. Luckily, the FPGA company provides a data sheet that tells us… diddly squat about what to expect on power consumption for our device. OK, that’s a bit of a lie. There’s something on there about the power dissipation being the second derivative of the trend in the number of lines of HDL code divided by the supply voltage times the current flow in coulombs per femtofortnight (or something like that). It seemed to make perfect sense when the FPGA applications engineer read it to us.

Fortunately again, the FPGA company gave us a nice spreadsheet thingy that would give us an estimate of our design’s power consumption – accurate to within just three or four orders of magnitude. We punched in what we knew about our design (and a few things we didn’t know, but guessed at). Voila! We got a power estimate… and a note that it exceeded the normal rating for our FPGA and we needed to make some changes. Or, maybe it didn’t and we’d be OK.
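To see why those guesses matter so much, here’s a toy sketch of what an “early estimate” boils down to. It is purely illustrative – the coefficients, the LUT count, and the function itself are invented for this article and are not any vendor’s model:

```python
# Toy "early estimate" in the spirit of the vendor spreadsheets -- purely
# illustrative, with guessed coefficients; it is NOT any vendor's model.

def early_power_estimate(luts_used, toggle_rate, freq_mhz,
                         static_mw_per_klut=1.5, dyn_uw_per_lut_mhz=0.02):
    """Return (static_mW, dynamic_mW) from a handful of guessed inputs."""
    static_mw = (luts_used / 1000.0) * static_mw_per_klut
    dynamic_mw = luts_used * toggle_rate * freq_mhz * dyn_uw_per_lut_mhz / 1000.0
    return static_mw, dynamic_mw

# The answer swings wildly with the one input we guessed at: toggle rate.
for toggle_rate in (0.10, 0.25, 0.50):
    s, d = early_power_estimate(luts_used=50_000, toggle_rate=toggle_rate, freq_mhz=200)
    print(f"toggle {toggle_rate:.0%}: static {s:.0f} mW + dynamic {d:.0f} mW")
```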

To understand FPGA power consumption, it’s helpful to realize that an FPGA is really kinda’ just a fancy memory. From there, it’s also useful to concede that dynamic power (the power our circuit uses doing useful work) may not be our biggest challenge. We may be hit hardest by static power – the power that is burned up doing pretty much nothing we want or care about. Static power is the equivalent of property tax – you have to pay it for just sitting there.

If we were to list all of the power forces affecting our FPGA design, two biggies would top it: static power and dynamic power. If we want to get picky and technical about it (and of course, we do) the static power comes in two flavors. Those are the pre-programmed static power (the power the FPGA uses before your amazing bit stream comes in and programs it to solve all of the world’s problems) and programmed static power (the power the FPGA uses after you program it, but before the clocks start toggling it into useful work mode). Pre-programmed static power is the FPGA company’s problem. They need to make sure the chip doesn’t draw a bunch of current right off the bat and, for example, cause your power supply to change its mind and shut things down before you get started.

Before we get to the “programmed” static power part, the device has to be configured. That brings up a whole new power issue – the current the device draws during the frantic few moments while it’s being programmed. In the old days, this inrush current could sometimes be greater than the current draw of the FPGA during normal operation – not a good thing. With modern FPGAs, though, the FPGA companies have taken this issue off the table as well. You’re still in the clear.

Now that the device is configured and ready to run, we start to be responsible for more of our own destiny. The “programmed” static power is something we have to deal with. You may ask “Why is static power a problem on my FPGA when it isn’t on my ASIC or other logic design?” Well, you know how an FPGA requires quite a few more transistors (like 7x-10x) to accomplish the same logic function as, say, a standard cell version? Most of those transistors are used to accomplish the “programmable” part of programmable logic. These configuration logic transistors are just sitting there leaking current the whole time your FPGA is configured – even if they’re not doing any of the actual active work. 10x the transistors equals 10x the leakage. Worse yet, all those transistors you’re not using at all on your FPGA – they’re leaking too. Trust us, static power is a problem in FPGAs.
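A back-of-the-envelope sketch makes that arithmetic concrete. Every number below is invented for illustration – the transistor counts, the overhead factor, and the per-transistor leakage are not from any real process or device:

```python
# Back-of-the-envelope leakage comparison -- every number here is invented
# for illustration, not taken from any process or device.

asic_transistors = 20e6            # standard-cell implementation of the logic
fpga_overhead = 10                 # roughly 7x-10x transistors for the same function
fpga_used_transistors = asic_transistors * fpga_overhead
fpga_unused_transistors = 300e6    # the rest of the fabric leaks too, even unused

leak_nw_per_transistor = 0.05      # assumed average leakage per transistor, in nW

asic_static_mw = asic_transistors * leak_nw_per_transistor * 1e-6
fpga_static_mw = (fpga_used_transistors + fpga_unused_transistors) * leak_nw_per_transistor * 1e-6

print(f"ASIC static: ~{asic_static_mw:.0f} mW, FPGA static: ~{fpga_static_mw:.0f} mW")
```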

Finally, we also get to think a bit about the kind of power that we actually understand and over which we have some modicum of rational control… dynamic power. This is power consumption like Mom used to make – good old fashioned faster-is-hotter down-home juice burning. Some of our old power saving tricks will even work here – but don’t expect to get anywhere near as fancy with things like clock gating, power islands, and the like on an FPGA design as you can on other technologies. Most of the clock circuits on FPGAs are already there before you start and can’t be radically altered by anything you do in your design. Remember that we said static power is a problem, though? In some designs, dynamic power may actually be number two on the list of power hogs.

When your FPGA goes out into the real world to do real work, it is usually sitting somewhere warmer than your lab bench. This can be a problem. As the device gets warmer, junction temperatures rise. As junction temperatures rise, leakage increases. As leakage increases, power consumption rises. As people who paid attention during thermo class know, that means temperature also rises. Does anyone see a problem with this vicious circle? Yep. It is possible to induce thermal runaway in an FPGA. Eventually, your transistors will stop transisting and you’ll be pumping the output of your power supply through a rapidly-melting blob of microscopic metal – cool if you were trying to design a smoke machine – bad if you were hoping for some other outcome.
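That feedback loop is easy to model crudely. The sketch below iterates junction temperature against leakage until things either settle or run away; the exponential leakage-versus-temperature behavior is qualitatively right, but every coefficient here is made up for illustration:

```python
# Crude model of the leakage/temperature feedback loop. The exponential
# leakage dependence is qualitatively right; every coefficient is invented.

def settle_or_run_away(dynamic_w, leak_w_at_25c, theta_ja_c_per_w,
                       ambient_c=50.0, leak_doubles_every_c=25.0, steps=200):
    """Iterate junction temperature until it converges or clearly runs away."""
    tj = ambient_c
    for _ in range(steps):
        leakage_w = leak_w_at_25c * 2 ** ((tj - 25.0) / leak_doubles_every_c)
        total_w = dynamic_w + leakage_w
        new_tj = ambient_c + total_w * theta_ja_c_per_w
        if new_tj > 150.0:
            return f"thermal runaway (Tj blew past 150 C at {total_w:.1f} W)"
        if abs(new_tj - tj) < 0.01:
            return f"settles at Tj ~ {new_tj:.0f} C, total {total_w:.1f} W"
        tj = new_tj
    return "did not converge"

print(settle_or_run_away(dynamic_w=3.0, leak_w_at_25c=1.0, theta_ja_c_per_w=8.0))  # bare package
print(settle_or_run_away(dynamic_w=3.0, leak_w_at_25c=1.0, theta_ja_c_per_w=3.0))  # with a heat sink
```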

It may be that your FPGA design will require fans, heat sinks, or other thermal mitigation measures. The only way to know is to do a power analysis of your design on your FPGA with your expected operating conditions taken into account.

Now, a word of warning is appropriate here. FPGA companies all supply power estimators as part of their design tool suites. Often, there are two tools – one that estimates power up front based on some broad parameters you supply, and one that does a “more accurate” post-design audit. The early estimator is intoxicatingly simple. The other is often a complex tool that does some analysis of your actual design, takes a lot more work on your part, and gives a more accurate answer. 

Here’s the problem – FPGA companies know that many people will use the “early estimate” tool to compare devices from various vendors. They don’t want you to pick a competitor’s device based on an early estimate of power consumption. Usually, you’ll see a whole bunch of fine print and caveats about how power estimation is an inexact science and your mileage may vary and that the resulting estimate is generally accurate to within plus or minus some big number.

Let’s be clear here. That “plus or minus” stuff will pretty much NEVER be minus. The FPGA tool designer who writes a program that gives pessimistic power estimates to potential customers will soon be pursuing other career options. The two estimate tools will generally play a game of “good cop / bad cop” with you. The good cop tells you everything is going to be just fine, and you should choose Vendor Y’s FPGA for your project. The bad cop tells you that you must have screwed up something in your design, because the power is much worse now and you need to mitigate and perhaps buy some expensive heat sinks and fans. The bad cop has to be bad because they don’t want to tell you your design will be fine and then have you calling their support line saying you’ve got a bunch of blue smoke leaking out.

A final thing to consider about power consumption is that power often varies a lot depending on the data being processed by the system. If you’re simulating and verifying based on some vectors that keep your circuit in a mode that doesn’t reflect real-world stimulus, you can sometimes end up with surprises when you build the real thing: “Oh, we didn’t realize that power consumption doubled when all the pixels are black!” Depending on your application, you should consider the effect of the data you’re processing on power consumption and look for potential pathological cases.
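A toy sketch shows how much activity can swing with the data. It just counts bus toggles for a few made-up 8-bit “pixel” patterns – the mapping from toggles to watts is deliberately left out – but the point is that the vectors you verify with may look nothing like the field data:

```python
# Toy illustration of data-dependent dynamic power: count bit transitions on
# an 8-bit "pixel" bus for a few input patterns. Counts only -- the mapping
# from toggles to watts is left notional on purpose.
import random

def bus_toggles(samples, width=8):
    """Count bit transitions between consecutive samples on a bus."""
    mask = (1 << width) - 1
    return sum(bin((a ^ b) & mask).count("1") for a, b in zip(samples, samples[1:]))

random.seed(0)
n = 10_000
patterns = {
    "constant (quiet test vector)": [0x80] * n,
    "checkerboard (pathological)":  [0x55, 0xAA] * (n // 2),
    "random (field-like video)":    [random.randrange(256) for _ in range(n)],
}

for name, frame in patterns.items():
    print(f"{name}: {bus_toggles(frame)} toggles")
```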

Overall, compared with ASIC design, there are fewer things you can do to reduce power in your FPGA. The set of optimizations is much smaller, and the effect of the optimizations you do moves the needle less because of the dramatically increased contribution of static power. In every case, an FPGA design will consume more power than a corresponding ASIC design. This doesn’t mean that FPGAs are bad for system power, however. In many cases (particularly where FPGAs are being used to accelerate algorithms like signal processing that might otherwise be done in software) FPGAs are dramatically more power efficient than their software-programmable brethren. Even though power is harder to estimate, control, and mitigate in an FPGA, an FPGA can be a power panacea for your overall system design.
