
Plunify – The Big FPGA Guns

AI Your FPGA

Eyes scan the optimization options like a master pool player about to run the table against a hapless opponent. Solids and stripes form a map – a complex mathematical model where LUTs and connections melt away and a unique strategy emerges. Quick optimization to get the lay of the land, 7 ball in the side pocket, multiple runs for timing, park the cue ball left to set up for the 5 in the corner, check power and Fmax, defensive move behind the 8 ball. The physics of the situation evolve, and the expert plays them like an instrument.

Many of us wear our FPGA expertise like badges of honor. We have spent decades learning the ins and outs – the quick tricks to move the design forward – always mindful of our adversary, the ghost in the machine who strives silently to give us something to push against – to bring out our best. It’s a great feeling when a defiant web of nets and nodes yields to our power and crystallizes into a unique and perfect SRAM snowflake where power, performance, timing, and functionality all coexist in an elusive arrangement of layout and logic.

Then, sometimes, we need to just get it done.

There are projects where we find a 7-digit number of LUTs looming over our heads without enough CPU cycles available to conquer them. Confounded by conflicting constraints and with an impossible schedule deficit weighing on our team, our egos take a back seat to practical reality. This is not the time to don our superhero cape. One of the hallmarks of a true expert is knowing one’s limits and recognizing when expertise isn’t enough. That’s when we need the job to just be done. That’s when we reluctantly reach for the big red handle and pull.

That’s when we need Plunify.

We’ve talked about Plunify before. Plunify began life as a provider of FPGA tools in the cloud. As most of us know, selling the idea of any design tool in the cloud has historically been an uphill struggle, as most design teams guard their IP jealously, and uploading hard-earned designs into distant machines is a scary proposition despite the utmost assurances of rock-solid security. Then, Plunify pivoted. They began to build an AI engine for FPGA optimization. Their machine-learning algorithms burned zillions of machine cycles running countless designs through FPGA tools and evaluating the results. Over time, those algorithms evolved recipes for success that – given access to enough compute power – could consistently beat the best human FPGA experts at the game of design closure.

Plunify then offered their AI FPGA optimization engine and recipes to companies who wanted to use their own large compute farms to crank out steady streams of super-challenging FPGA designs. Plunify earned their stripes and created a solid niche business giving some of the biggest users of FPGAs a secret weapon for improving time-to-market and for squeezing the last drop of performance and capacity out of big FPGAs – all within the secure walls of their own data centers. Plunify’s “InTime” took the reins of Intel/Altera’s Quartus and Xilinx’s Vivado and ISE tools, claiming performance gains in the 20% range simply by letting machine learning and massive compute power work their magic on the standard FPGA tools, finding the elusive combination of options that yields a solution meeting the critical constraints on challenging designs.

Of course, Plunify didn’t want to be a “secret” weapon. They wanted to build a big market for their technology, helping the whole FPGA world to get more oomph out of FPGAs with less headache and uncertainty. So, while Plunify’s most successful customers kept their wins on the DL, the company began work on a plan to bring the power of their AI FPGA wizardry to the masses. Recently, they have “as a service-d” InTime – rolling out a new offering called Plunify InTime Service.

With InTime Service, you use Plunify as your design closure consultant. You give them your design and your constraints, and they unleash the dragons with a giant compute farm aiming the InTime recipes at your problem. And, in the best “show me the money” tradition of confidence, you pay only if Plunify is able to hit your numbers. Now, the big red handle is available to any design team – not just those with the resources for a dedicated in-house data center for FPGA optimization. Even if you are a certified black-belt FPGA expert, your time is probably better spent getting your project finished rather than spending weeks tweaking options and manually iterating your design in order to maybe – or maybe not – ultimately satisfy your constraints.

Why, you may ask, is InTime able to get better results than what we all get just running the FPGA tools according to the FPGA companies’ instructions? After all, those FPGA companies have enormous teams of engineers working day and night to get the best results from their tools so that their chips will outperform those of their competitors. This is a complex question worthy of some consideration. First, FPGA companies must build tools to work on a lowest-common-denominator hardware configuration. If they were to try to squeeze the absolute most capability out of them by optimizing them for big, fast, parallel compute environments, they’d leave a lot of less-well-funded customers behind. Any dependence they put on high-end hardware configurations must be balanced against the need to run acceptably on the smallest, weakest compute platforms they support.

Similarly, FPGA tools get better results with more processing. When FPGA companies tune their “just push the button” mode, they have to carefully consider how long the average user is willing to wait for results. Sometimes we want quick-and-dirty answers, and sometimes we want to squeeze out every last drop of performance. In order to facilitate that, FPGA companies have to give us reasonable controls on the tools, but without making them overly complicated. That means that the default can never be the “best possible answer” mode. For that, we need to exercise the almost innumerable tuning and optimization options in the tools – and THAT is where expert users come in.
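To make the scale of that problem concrete, here is a toy sketch in Python. The option names and values are invented for illustration – they are not the actual Quartus or Vivado switch lists – but even a handful of made-up knobs multiplies into thousands of distinct settings combinations, each of which costs a full compile to evaluate.

```python
import random

# Hypothetical option space, loosely modeled on the kinds of knobs FPGA
# synthesis and place-and-route tools expose. Names and values are
# illustrative only, not actual Quartus or Vivado options.
option_space = {
    "synth_directive":      ["default", "area", "speed", "retime"],
    "place_directive":      ["default", "explore", "extra_effort", "congestion"],
    "route_directive":      ["default", "explore", "aggressive"],
    "phys_opt_passes":      [0, 1, 2, 3],
    "fanout_limit":         [100, 400, 10000],
    "register_duplication": [True, False],
    "retiming":             [True, False],
}

# Count the distinct settings combinations in this toy space.
total = 1
for values in option_space.values():
    total *= len(values)
print(f"{total} combinations from just {len(option_space)} knobs")

# One candidate "recipe" chosen at random: the kind of thing a blind
# sweep would have to repeat thousands of times, at hours per compile.
recipe = {name: random.choice(values) for name, values in option_space.items()}
print("one candidate recipe:", recipe)
```

Real tools expose far more knobs than this, which is why the push-button defaults have to stop at “good enough,” and why an expert – human or machine – earns their keep by knowing which few combinations are worth a run.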

Because there are so many “expert mode” options in FPGA tools, it pays to have an expert driving them. There is no “one recipe fits all” set of optimization settings that will give the best results every time. In fact, the best settings vary widely depending on the size and type of design, how it is interconnected, which on-chip resources it requires, and so forth. An expert FPGA designer looks carefully at the design, recognizes key characteristics, and runs numerous trials to see what kind of tuning tricks will give the best results. InTime uses AI to do exactly that sort of experimentation, using a training set that consists of a large number of real-world designs.
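Below is a minimal sketch of that trial-and-score loop in Python. It uses plain random search and a fake run_tool_flow() stand-in for an hours-long compile, so it is only a baseline for intuition; the point of a learned approach like InTime’s is to rank candidate recipes before spending compile time on them, rather than sampling blindly.

```python
import random

def run_tool_flow(recipe):
    """Hypothetical stand-in for a full synthesis/place-and-route run with
    the given settings, returning worst slack in ns (negative = failing).
    In reality each call is an hours-long compile plus timing-report parsing."""
    rng = random.Random(hash(frozenset(recipe.items())))
    return rng.uniform(-0.8, 0.3)

def explore(option_space, trials=20):
    """Naive baseline: sample random settings combinations, keep the best.
    A trained model would instead predict which recipes look promising for
    this particular design before burning compile time on them."""
    best_recipe, best_slack = None, float("-inf")
    for _ in range(trials):
        recipe = {k: random.choice(v) for k, v in option_space.items()}
        slack = run_tool_flow(recipe)
        if slack > best_slack:
            best_recipe, best_slack = recipe, slack
    return best_recipe, best_slack

option_space = {
    "synth_directive": ["default", "area", "speed", "retime"],
    "place_directive": ["default", "explore", "extra_effort"],
    "route_directive": ["default", "explore", "aggressive"],
    "retiming":        [True, False],
}
best, slack = explore(option_space)
print(f"best worst-slack {slack:+.3f} ns with {best}")
```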

In this way, InTime is not trying to replace or compete with FPGA tools. Rather, it is supplying an artificially-intelligent expert user of those tools. Plunify is constantly evolving the training set – removing old design results that are no longer applicable to the current chips and tool versions. The biggest challenge for InTime is when a new FPGA device comes out with new characteristics. It takes a large amount of design crunching for the system to become expert on the new device and its quirks.
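We don’t know what Plunify’s training data actually looks like, but the idea of “evolving the training set” can be sketched in a few lines: tag every historical compile result with its device family and tool version, and prune the entries whose lessons no longer transfer. The fields below are assumptions for illustration, not Plunify’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CompileResult:
    """One historical compile outcome. The fields are a guess at the kind of
    metadata such a training set would need, not Plunify's actual schema."""
    device_family: str     # e.g. "UltraScale+", "Stratix 10"
    tool_version: tuple    # e.g. (2018, 3)
    settings: dict = field(default_factory=dict)
    worst_slack_ns: float = 0.0

def prune(results, supported_families, min_tool_version):
    """Drop results from retired device families or obsolete tool releases,
    since recipes learned there may no longer apply to current chips."""
    return [
        r for r in results
        if r.device_family in supported_families
        and r.tool_version >= min_tool_version
    ]

history = [
    CompileResult("UltraScale+", (2018, 3), {"retiming": True}, 0.12),
    CompileResult("Virtex-6",    (2013, 4), {"retiming": False}, -0.05),
]
print(prune(history, {"UltraScale+"}, (2017, 1)))  # keeps only the first entry
```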

Between InTime and InTime Service, Plunify seems to run the gamut for high-end FPGA design. For companies who want to keep their entire design environment siloed, InTime is a self-contained expert AI FPGA designer, ready to crank out constraint-meeting designs without asking for overtime pay or vacation. For companies with only the occasional crunch-time requirement to “just get it done” on a difficult FPGA-based project, InTime Service offers a zero-risk way to bring in the big guns. It will be interesting to watch how this approach catches on with the larger FPGA design community.

7 thoughts on “Plunify – The Big FPGA Guns”

  1. This seems like a task for machine learning. The FPGA manufacturers have 1,000s of designs and the optimum tool settings their experts used to help customers close timing on those designs. Yep a computer could do this job.

    1. Yep, it is an ML problem. (I am from Plunify) A key component to solving this is the abundance of cheap cloud computing power. We have deployed up to 500 cloud servers a day for just 8 hours, running multiple designs concurrently. At first glance, it might seem like brute force. But if you consider the possible combinations of settings (there are billions of combinations) versus the runs we do, what we are aiming for is very small in comparison and quite targeted.

      A big gun requires powerful bullets – InTime(ML) is the gun and cloud computing is the bullets!

      1. @kirvy: ” But if you consider the possible combinations of settings (there are billions of combinations) versus the runs we do, what we are aiming for is very small in comparison and quite targeted.”
        How do you choose which settings to run?

  2. @Beercandyman – Yep, that is what Plunify is doing. Machine learning that learns to operate the FPGA tools based on a large training set of designs. It’s interesting that it is a third party (Plunify) doing this rather than the FPGA companies themselves. Similarly, with both FPGA companies touting how good their devices are for data center acceleration, it’s interesting that they have not yet applied that to their own tools. There are already pools of FPGAs in data centers – like Amazon EC2 F1 instances – which would seem on the surface like an ideal vehicle for accelerating FPGA place-and-route and timing optimization, for example. Granted, these are very difficult applications to parallelize and accelerate, but it seems like it would also be a solid proof point for FPGA-based acceleration. Of course, FPGA companies are first and foremost focused on getting their tools to work well on the types of machines most designers use today.

  3. This is very interesting and allows the easy deployment of FPGAs for AI applications without any expertise in FPGA programming.
    We are also working towards the seamless integration of FPGAs for data analytics and machine learning applications using e.g. Spark in the European-funded VINEYARD research project. http://vineyard-h2020.eu
    The easy integration of FPGAs from high-level programming frameworks like Spark will allow the widespread adoption of FPGAs in data centers (e.g. a company working on this end is http://www.inaccel.com )
