
Xilinx Brings AI to FPGA EDA

For decades, FPGA companies have been bragging about how their devices can be used to accelerate all sorts of compute-intensive functions, and they have built a large market for themselves convincing design teams that they can handle the toughest problems with aplomb – speeding up execution and reducing power demands. 

“Hey!” said this one guy in the back. “How come you don’t accelerate your own design tools, then?”

This always started a bout of hand-wringing and double-talk, and then a marketing person would step up and say some words about target market generality and solution portability and user platform preferences and therefore… What was the question again?

Now, FPGAs are a big deal for accelerating AI inference, and we all know that AI is useful for… just about everything, right? 

Uh, oh. Here comes “guy in the back” again. “Uh, if AI is so great, why don’t you use it to make your design tools work better?”

“Hey,” said Xilinx. “Let’s do just that!” (Editor’s note: These may not have been their actual words.)

Nonetheless, Xilinx has now announced Vivado ML Editions, and it looks like they’ve achieved some pretty impressive results using machine learning to improve their FPGA design tools. The tl;dr of this is that Xilinx has used the magic of AI along with some not-quite-magic hierarchical design flow optimization to improve quality of results (QoR for those in the know) in the range of 10%-50% over a wide range of design styles. And, at the same time, they have achieved a “5x-17x compile time reduction.” (Yeah, we know. “5x reduction” isn’t a thing. What they really mean to say is an 80% to 94% reduction in compile time).
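The speedup-to-reduction conversion is simple arithmetic, sketched here for the curious (the 5x and 17x figures come from Xilinx's claim above):

```python
# Convert an "Nx compile time reduction" (really an Nx speedup) into the
# percentage reduction in compile time: reduction = 1 - 1/speedup.
def percent_reduction(speedup: float) -> float:
    return (1 - 1 / speedup) * 100

print(f"{percent_reduction(5):.0f}%")   # 5x speedup  -> 80%
print(f"{percent_reduction(17):.0f}%")  # 17x speedup -> 94%
```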

So, those numbers sound like a big deal, because they are. QoR gains are historically hard fought. Engineers working on algorithms for tasks such as synthesis and place-and-route are overjoyed if they can make an algorithm change that will improve results by an average of 1-2%, and even then they will typically have to show that over a large range of design types, with some designs improving and some getting worse, but an average improvement. It appears that Xilinx’s AI additions only make things better.

Compiling FPGA designs, even with the best tools, is somewhat of an art. Expert users of design tools can wave their magic wands, adjust optimization options and constraints, spend enormous amounts of CPU time, and achieve results that are significantly better than what the average user will get firing up the tool and pressing the big green button. 

Part of what Xilinx is doing is throwing AI models in as the new expert. They have trained their tools over an enormous number of designs, and the AI can come much closer than in the past to achieving optimal settings on the first try – ergo – better results faster. Different parts of the design flow benefit best from different ML approaches, so Xilinx has thrown a wide gamut of trained models at the problem, with impressive results. 

Xilinx says average design size has been growing at a whopping 38% CAGR, and with current devices packing something like 92 billion transistors, that makes the EDA problem rapidly approach “untenable.” Legacy tools that were already struggling to keep up with the trends needed some help fast, and throwing a little AI at the problem may be just the right step at the right time.
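To put that 38% CAGR in perspective, a quick back-of-the-envelope calculation shows how fast average design size doubles at that rate:

```python
import math

# At a 38% compound annual growth rate, average design size doubles
# roughly every two years: doubling_time = ln(2) / ln(1 + CAGR).
cagr = 0.38
doubling_years = math.log(2) / math.log(1 + cagr)
print(f"{doubling_years:.1f} years")  # -> 2.2 years
```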

Looking at the timing closure problem, for example, Xilinx has created what they call “Intelligent Design Runs,” or “IDR.” IDR combines three ML techniques: ML-based congestion estimation, which predicts router behavior to see where the routing hot spots will be and works to avoid them; ML-based delay estimation, which finds the timing-closure hot spots with better delay predictions for complex routing; and ML-based strategy prediction, which picks from 60+ custom strategies based on training data from more than 100,000 designs. IDR can dramatically reduce the number of design iterations required to achieve timing closure, taking a big bite out of an unpredictable chunk of the project schedule.

The key here is that there are definite patterns in electronic designs, particularly with the re-use of large IP blocks and with hierarchical design techniques. This plays well into the hands of AI training, taking advantage of the complex pattern recognition abilities of AI/ML to choose strategies that have succeeded in similar designs. 
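To make the idea concrete, here is a toy sketch of what strategy prediction could look like in spirit. This is not Xilinx's actual model; the feature set, training entries, and strategy names are all invented for illustration. It simply picks the strategy that worked best on the most similar previously seen design:

```python
import math

# Hypothetical design features: (LUT utilization, congestion estimate,
# target clock in GHz). All entries below are invented for illustration.
TRAINING = [
    ((0.85, 0.90, 0.500), "CongestionSpreadLogic"),
    ((0.40, 0.30, 0.750), "PerformanceExplore"),
    ((0.60, 0.50, 0.300), "AreaOptimized"),
]

def predict_strategy(features):
    # Nearest-neighbor lookup: reuse the strategy that succeeded on the
    # training design closest to this one in feature space.
    _, strategy = min(TRAINING, key=lambda t: math.dist(t[0], features))
    return strategy

print(predict_strategy((0.80, 0.85, 0.450)))  # -> CongestionSpreadLogic
```

A real system would use far richer features and a trained model rather than raw nearest-neighbor, but the principle is the same: similar designs tend to succeed with similar strategies.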

Of course, modern designs aren’t made in a one-shot vacuum. Xilinx has taken advantage of the incremental, hierarchical, team-based nature of most projects to grab another huge helping of performance with hierarchical module-based compilation and what they are calling the “Abstract Shell.” In hierarchical module-based compilation, the user defines reconfigurable modules (RMs), and the tool flow compiles and closes timing only on what’s contained in a defined RM. This approach gives a significant speedup, since engineers can work in parallel on different parts of the design, and it also provides secure handoff, because the active design files do not contain the rest of the design. The Abstract Shell decouples the compilation of the platform’s interfaces from the processing kernels. It also lets design teams easily create and re-use their own dynamic IP, promoting a team-based, divide-and-conquer methodology and enabling multisite cooperation on large designs.
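The divide-and-conquer payoff is easy to picture: because each RM compiles against a fixed shell interface without seeing the rest of the design, the per-module runs are independent and can go in parallel. Here is a minimal sketch of that idea (the module names and the `compile_rm` stand-in are invented; real per-module compilation happens inside the vendor tool flow, not in Python):

```python
from concurrent.futures import ThreadPoolExecutor

def compile_rm(rm_name: str) -> str:
    # Placeholder for a real per-module synthesis and place-and-route run.
    # Each RM needs only the shell's interface, not the rest of the design.
    return f"{rm_name}.bit"

# Invented module names, one per team or site.
rms = ["video_pipeline", "crypto_engine", "dsp_filter"]
with ThreadPoolExecutor() as pool:
    partial_bitstreams = list(pool.map(compile_rm, rms))

print(partial_bitstreams)
# -> ['video_pipeline.bit', 'crypto_engine.bit', 'dsp_filter.bit']
```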

This block-based approach extends to dynamic reconfiguration using what Xilinx calls Dynamic Function eXchange (DFX), which saves silicon real estate by allowing custom hardware accelerators to be dynamically loaded and swapped at runtime. DFX accelerates complex applications and facilitates remote hardware upgrades and over-the-air updates while the device is in operation.

These upgrades to Vivado are a welcome improvement in the face of the rapid, continual growth of FPGA design size and complexity, and they should be just the first round showing the potential of applying AI/ML techniques to FPGA EDA. One could even see a future where FPGAs are used to accelerate the AI/ML portion of the FPGA design flow, finally completing the circle.
