
Plunify – The Big FPGA Guns

AI Your FPGA

Eyes scan the optimization options like a master pool player about to run the table against a hapless opponent. Solids and stripes form a map – a complex mathematical model where LUTs and connections melt away and a unique strategy emerges. Quick optimization to get the lay of the land, 7 ball in the side pocket, multiple runs for timing, park the cue ball left to set up for the 5 in the corner, check power and Fmax, defensive move behind the 8 ball. The physics of the situation evolve, and the expert plays them like an instrument.

Many of us wear our FPGA expertise like badges of honor. We have spent decades learning the ins and outs – the quick tricks to move the design forward – always mindful of our adversary, the ghost in the machine who strives silently to give us something to push against – to bring out our best. It’s a great feeling when a defiant web of nets and nodes yields to our power and crystallizes into a unique and perfect SRAM snowflake where power, performance, timing, and functionality all coexist in an elusive arrangement of layout and logic.

Then, sometimes, we need to just get it done.

There are projects where we find a 7-digit number of LUTs looming over our heads without enough CPU cycles available to conquer them. Confounded by conflicting constraints and with an impossible schedule deficit weighing on our team, our egos take a back seat to practical reality. This is not the time to don our superhero cape. One of the hallmarks of a true expert is knowing one’s limits and recognizing when expertise isn’t enough. That’s when we need the job to just be done. That’s when we reluctantly reach for the big red handle and pull.

That’s when we need Plunify.

We’ve talked about Plunify before. Plunify began life as a provider of FPGA tools in the cloud. As most of us know, selling the idea of any design tool in the cloud has historically been an uphill struggle, as most design teams guard their IP jealously, and uploading hard-earned designs into distant machines is a scary proposition despite the utmost assurances of rock-solid security. Then, Plunify pivoted. They began to build an AI engine for FPGA optimization. Their machine-learning algorithms burned zillions of machine cycles running countless designs through FPGA tools and evaluating the results. Over time, those algorithms evolved recipes for success that – given access to enough compute power – could consistently beat the best human FPGA experts at the game of design closure.

Plunify then offered their AI FPGA optimization engine and recipes to companies that wanted to use their own large compute farms to crank out steady streams of super-challenging FPGA designs. Plunify earned their stripes and created a solid niche business giving some of the biggest users of FPGAs a secret weapon for improving time-to-market and for squeezing the last drop of performance and capacity out of big FPGAs – all within the secure walls of their own data centers. Plunify’s “InTime” took the reins of Intel/Altera’s Quartus and Xilinx’s Vivado and ISE tools, claiming performance gains in the 20% range simply by letting machine learning and massive compute power work their magic on the standard FPGA tools, finding the elusive combination of options that yields a solution meeting the critical constraints on challenging designs.
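
To make that concrete, here is a minimal sketch – ours, not Plunify’s – of the kind of option sweep InTime automates, assuming Xilinx’s Vivado batch-mode Tcl flow. The checkpoint name, the particular directive subset, and the result file names are illustrative assumptions only.

```python
# Hypothetical sketch of sweeping Vivado implementation directives and recording
# worst negative slack (WNS) per combination. Assumes vivado is on the PATH and a
# synthesized checkpoint named post_synth.dcp exists (both are assumptions here).
import csv
import itertools
import subprocess

PLACE_DIRECTIVES = ["Default", "Explore", "ExtraNetDelay_high", "AltSpreadLogic_medium"]
ROUTE_DIRECTIVES = ["Default", "Explore", "AggressiveExplore", "NoTimingRelaxation"]

TCL_TEMPLATE = """
open_checkpoint post_synth.dcp
opt_design
place_design -directive {place}
route_design -directive {route}
set wns [get_property SLACK [get_timing_paths -max_paths 1 -setup]]
set fh [open "{result}" w]
puts $fh $wns
close $fh
"""

results = []
for place, route in itertools.product(PLACE_DIRECTIVES, ROUTE_DIRECTIVES):
    tag = f"{place}__{route}"
    result_file = f"wns_{tag}.txt"
    with open(f"run_{tag}.tcl", "w") as f:
        f.write(TCL_TEMPLATE.format(place=place, route=route, result=result_file))
    # Each run is independent, so a real flow would launch them in parallel.
    subprocess.run(["vivado", "-mode", "batch", "-source", f"run_{tag}.tcl",
                    "-journal", f"{tag}.jou", "-log", f"{tag}.log"], check=True)
    wns = float(open(result_file).read().strip())
    results.append({"place": place, "route": route, "wns_ns": wns})

with open("sweep_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["place", "route", "wns_ns"])
    writer.writeheader()
    writer.writerows(results)

# Higher (less negative) WNS is better; this is the winning combination.
print(max(results, key=lambda r: r["wns_ns"]))
```

A real deployment would farm these independent runs out across hundreds of machines and pick which combinations to attempt far more cleverly – which is exactly where the machine learning and the compute farm come in.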

Of course, Plunify didn’t want to be a “secret” weapon. They wanted to build a big market for their technology, helping the whole FPGA world to get more oomph out of FPGAs with less headache and uncertainty. So, while Plunify’s most successful customers kept their wins on the DL, the company began work on a plan to bring the power of their AI FPGA wizardry to the masses. Recently, they have “as a service-d” InTime – rolling out a new offering called Plunify InTime Service.

With InTime Service, you use Plunify as your design closure consultant. You give them your design and your constraints, and they unleash the dragons with a giant compute farm aiming the InTime recipes at your problem. And, in the best “show me the money” tradition of confidence, you pay only if Plunify is able to hit your numbers. Now, the big red handle is available to any design team – not just those with the resources for a dedicated in-house data center for FPGA optimization. Even if you are a certified black-belt FPGA expert, your time is probably better spent getting your project finished rather than spending weeks tweaking options and manually iterating your design in order to maybe – or maybe not – ultimately satisfy your constraints.

Why, you may ask, is InTime able to get better results than we all get just by running the FPGA tools according to the FPGA companies’ instructions? After all, those FPGA companies have enormous teams of engineers working day and night to get the best results from their tools so that their chips will outperform those of their competitors. This is a complex question worthy of some consideration. First, FPGA companies must build tools to work on a lowest-common-denominator hardware configuration. If they were to try to squeeze the absolute most capability out of them by optimizing them for big, fast, parallel compute environments, they’d leave a lot of less-well-funded customers behind. Any dependence they put on high-end hardware configurations must be balanced against usability on the smallest, weakest compute platforms they support.

Similarly, FPGA tools get better results with more processing. When FPGA companies tune their “just push the button” mode, they have to carefully consider how long the average user is willing to wait for results. Sometimes we want quick-and-dirty answers, and sometimes we want to squeeze out every last drop of performance. In order to facilitate that, FPGA companies have to give us reasonable controls on the tools, but without making them overly complicated. That means that the default can never be the “best possible answer” mode. For that, we need to exercise the almost innumerable tuning and optimization options in the tools – and THAT is where expert users come in.
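
“Almost innumerable” is barely an exaggeration. As a back-of-envelope illustration – the option counts below are invented, not the actual totals for Vivado or Quartus – a handful of multi-valued directives plus a couple dozen on/off switches already yields tens of billions of combinations:

```python
# Back-of-envelope only: these option counts are invented for illustration,
# not the actual totals for any vendor's tools.
from math import prod

cardinalities = [8] * 5       # say, five per-phase directives with ~8 choices each
cardinalities += [2] * 20     # plus twenty independent on/off switches

total = prod(cardinalities)   # 8**5 * 2**20 = 34,359,738,368
print(f"{total:,} possible combinations")  # ~34 billion; even 1,000 trial runs touch a tiny fraction
```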

Because there are so many “expert mode” options in FPGA tools, it pays to have an expert driving them. There is no “one recipe fits all” set of optimization settings that will give the best results every time. In fact, the best settings vary widely depending on the size and type of design, how it is interconnected, which on-chip resources it requires, and so forth. An expert FPGA designer looks carefully at the design, recognizes key characteristics, and runs numerous trials to see what kind of tuning tricks will give the best results. InTime uses AI to do exactly that sort of experimentation, using a training set that consists of a large number of real-world designs.
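
As a toy illustration of what that experimentation can look like – and emphatically not Plunify’s actual algorithm – the sketch below seeds a simple regression model with a few real runs, lets the model rank the untried option combinations, and spends the remaining compute budget only on the most promising ones. The option space, the encoding, and the stand-in run_tools() function are all assumptions made up for the example.

```python
# Toy model-guided search over tool settings (illustrative only, not InTime).
import itertools
import random
from sklearn.ensemble import RandomForestRegressor

OPTIONS = {                    # hypothetical option space
    "synth_directive": ["Default", "AreaOptimized", "PerformanceOptimized"],
    "place_directive": ["Default", "Explore", "ExtraNetDelay_high"],
    "route_directive": ["Default", "Explore", "AggressiveExplore"],
    "retiming": [0, 1],
}
KEYS = list(OPTIONS)
ALL_COMBOS = [dict(zip(KEYS, values)) for values in itertools.product(*OPTIONS.values())]

def encode(combo):
    # Encode each chosen value as its index; adequate for tree-based models.
    return [OPTIONS[key].index(combo[key]) for key in KEYS]

def run_tools(combo):
    # Stand-in for a real synthesis/place-and-route run returning WNS in ns.
    # In practice each call costs hours of compute, which is why a model decides
    # which combinations are worth running at all.
    rng = random.Random(str(combo))
    return round(rng.uniform(-1.5, 0.3), 3)

# Phase 1: a small random sample of real runs seeds the model.
seed_combos = random.sample(ALL_COMBOS, 8)
X = [encode(c) for c in seed_combos]
y = [run_tools(c) for c in seed_combos]
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Phase 2: rank every untried combination by predicted WNS, run only the top few.
untried = [c for c in ALL_COMBOS if c not in seed_combos]
ranked = sorted(untried, key=lambda c: model.predict([encode(c)])[0], reverse=True)
for combo in ranked[:4]:
    print(combo, "->", run_tools(combo), "ns WNS")
```

Swap the stand-in run_tools() for real tool invocations and the same loop becomes a very crude relative of the approach Plunify describes.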

In this way, InTime is not trying to replace or compete with FPGA tools. Rather, it is supplying an artificially-intelligent expert user of those tools. Plunify is constantly evolving the training set – removing old design results that are no longer applicable to the current chips and tool versions. The biggest challenge for InTime is when a new FPGA device comes out with new characteristics. It takes a large amount of design crunching for the system to become expert on the new device and its quirks.

Between InTime and InTime Service, Plunify seems to run the gamut for high-end FPGA design. For companies that want to keep their entire design environment siloed, InTime is a self-contained expert AI FPGA designer, ready to crank out constraint-meeting designs without asking for overtime pay or vacation. For companies with only the occasional crunch-time requirement to “just get it done” on a difficult FPGA-based project, InTime Service offers a zero-risk way to bring in the big guns. It will be interesting to watch how this approach catches on with the larger FPGA design community.

7 thoughts on “Plunify – The Big FPGA Guns”

  1. This seems like a task for machine learning. The FPGA manufacturers have 1,000s of designs and the optimum tool settings their experts used to help customers close timing on those designs. Yep, a computer could do this job.

    1. Yep, it is an ML problem. (I am from Plunify) A key component to solving this is the abundance of cheap cloud computing power. We have deployed up to 500 cloud servers a day for just 8 hours, running multiple designs concurrently. At first glance, it might seem like brute force. But if you consider the possible combinations of settings (there are billions of combinations) versus the runs we do, what we are aiming for is very small in comparison and quite targeted.

      A big gun requires powerful bullets – InTime(ML) is the gun and cloud computing is the bullets!

      1. @kirvy: ” But if you consider the possible combinations of settings (there are billions of combinations) versus the runs we do, what we are aiming for is very small in comparison and quite targeted.”
        How do you choose which settings to run?

  2. @Beercandyman – Yep, that is what Plunify is doing. Machine learning that learns to operate the FPGA tools based on a large training set of designs. It’s interesting that it is a third party (Plunify) doing this rather than the FPGA companies themselves. Similarly, with both FPGA companies touting how good their devices are for data center acceleration, it’s interesting that they have not yet applied that to their own tools. There are already pools of FPGAs in data centers – like Amazon EC2 F1 instances – which would seem on the surface like an ideal vehicle for accelerating FPGA place-and-route and timing optimization, for example. Granted, these are very difficult applications to parallelize and accelerate, but it seems like it would also be a solid proof point for FPGA-based acceleration. Of course, FPGA companies are first and foremost focused on getting their tools to work well on the types of machines most designers use today.

  3. This is very interesting and allows the easy deployment of FPGAs for AI applications without any expertise in FPGA programming.
    We are also working towards the seamless integration of FPGAs for data analytics and machine learning applications using e.g. Spark in the European-funded VINEYARD research project. http://vineyard-h2020.eu
    The easy integration of FPGAs from high-level programming frameworks like Spark will allow the widespread adoption of FPGAs in data centers (e.g. a company working in this area is http://www.inaccel.com )

