Selecting the Right FPGA Synthesis Tool

Looking Beyond the Obvious

When it comes to successful FPGA implementation, synthesis serves as arguably the most influential step in ensuring that design requirements are met and the product can be shipped. In many cases, design requirements refer to performance and area—the design needs to operate at a minimum frequency and it needs to fit into the selected device. Hence, designers or CAD managers looking to standardize on a synthesis tool tend to look for good out-of-the-box quality-of-results (QoR).

The criteria for selecting the right synthesis tool, however, can and should involve more than this. Adequate out-of-the-box QoR is a legitimate starting point, but it is not the whole story in terms of a successful FPGA design methodology. Often, meeting aggressive QoR goals requires refining constraints or re-coding small portions of RTL, but without help from the tool it is difficult to identify these areas. In other cases, QoR goals may have been met, but design changes are constantly being introduced; either the new changes degrade the previous QoR results, or the runtime of re-synthesizing each change delays the project schedule. FPGA designers and managers must take these other scenarios into account and understand how a tool addresses them before selecting one.
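To make the re-coding point concrete: when a long combinational path limits frequency, the usual small RTL fix is to pipeline the operation. Below is a minimal Verilog sketch of this kind of change; the module and signal names are illustrative and not tied to any particular tool or flow.

    // Before pipelining, a 32x32 multiply feeding an accumulate in a
    // single cycle is often the critical path. Registering the product
    // splits the path across two cycles; latency grows by one cycle,
    // but each stage now has to meet only its own shorter delay.
    module mac_pipelined (
      input  wire        clk,
      input  wire [31:0] a, b,
      input  wire [63:0] acc_in,
      output reg  [63:0] acc_out
    );
      reg [63:0] prod_q;               // new pipeline register
      always @(posedge clk) begin
        prod_q  <= a * b;              // stage 1: multiply
        acc_out <= prod_q + acc_in;    // stage 2: accumulate
      end
    endmodule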

Quality-of-Results

As mentioned, a tool's ability to produce adequate quality-of-results is an important criterion and should not be downplayed. If the FPGA cannot operate at the required frequency, the product simply cannot ship. And if the design cannot fit into the selected device, moving to a larger device may affect the product's cost structure. It should be no surprise that a common reason users move to third-party EDA synthesis tools is that they are not meeting their QoR requirements with free vendor tools.

While many features contribute to QoR, such as advanced technology inference, retiming, and resource sharing, one capability to consider carefully is the tool's physical synthesis support. Physical synthesis, as opposed to standard RTL synthesis, is a method of synthesizing RTL based on the physical characteristics of the target FPGA device. Whereas regular RTL synthesis accounts only for logic cell delays and wire-load timing models in its algorithms, physical synthesis produces an optimized netlist by taking into account the placement, or potential placement, of logic on the device and its routing resources. Different vendors take different approaches to physical synthesis, but the goal is the same: improve design performance, particularly for high-end devices or designs with aggressive performance goals.
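Of the more conventional optimizations listed above, resource sharing is the easiest to picture. A naive description such as y = sel ? a*b : c*d can infer two multipliers; a tool with resource sharing effectively transforms it into the hand-coded equivalent sketched below (names are illustrative). Physical synthesis then layers placement awareness on top of netlist-level decisions like this one.

    // Muxing the operands first means a single multiplier serves both
    // cases. A good synthesis tool performs this transformation
    // automatically when it does not hurt timing.
    module shared_mult (
      input  wire        sel,
      input  wire [15:0] a, b, c, d,
      output wire [31:0] y
    );
      wire [15:0] op1 = sel ? a : c;
      wire [15:0] op2 = sel ? b : d;
      assign y = op1 * op2;            // one multiplier, muxed inputs
    endmodule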

Designers should pay particular attention to the device support offered by a vendor's physical synthesis. An impressive optimization technology has limited use if it is only available for a handful of devices. Even if it supports the device of their current project, designers typically want the freedom to switch devices or FPGA families and still have all of the tool's optimization technologies at their disposal. Ease of use is another factor: traditional methods of FPGA physical synthesis require expert knowledge of the chip, but many users would rather not invest time acquiring such device knowledge and seek a more push-button physical synthesis flow.

Design Analysis

As design complexity continues to increase, design analysis remains a critical component of the synthesis flow. In an ideal scenario, a design is imported into the tool, a button is pushed, and a netlist is generated that meets all requirements, with no analysis needed. Unfortunately, this scenario is neither common nor realistic. Few engineers get the design and timing constraints right the first time, and meeting aggressive QoR goals may require the user to explore synthesis trade-offs. A good tool provides analysis features that help the user examine constraints, study the design itself, and work through architectural trade-offs to meet timing or area goals.

Particularly important is the ability to analyze trade-offs when optimizing for either performance or area. Descriptive timing reports, graphical views of critical paths, and the ability to cross-probe to relevant HDL are examples of the analysis features needed to understand QoR bottlenecks.

Another example is the ability to graphically analyze and control the allocation of embedded memory or DSP resources on the device. Embedded resources are specialized areas on the FPGA that can map certain operators more efficiently for performance, area, or both. The ability to see, before synthesis is performed, how RTL functions will be mapped allows the user to make changes as needed to obtain superior results. Figure 1 shows a graphical view of all operators in a design that can potentially be mapped to embedded memory or DSP resources. Equally important is the ability to cross-probe from such a view to the relevant design schematic or HDL source.

Figure 1: Graphical Resource Analysis
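The mapping itself is driven largely by coding style and, where needed, by attributes. The Verilog sketch below shows a memory pattern that most tools infer as embedded block RAM; the ram_style attribute is vendor-specific and is shown here in one common convention, so check your own tool's documentation.

    // Synchronous write and registered read: a pattern commonly
    // inferred as embedded block RAM. The (vendor-specific) attribute
    // overrides the default block-vs-distributed decision if the
    // tool's choice is not the one you want.
    module ram_1k (
      input  wire       clk, we,
      input  wire [9:0] addr,
      input  wire [7:0] din,
      output reg  [7:0] dout
    );
      (* ram_style = "block" *) reg [7:0] mem [0:1023];
      always @(posedge clk) begin
        if (we) mem[addr] <= din;
        dout <= mem[addr];             // registered read
      end
    endmodule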

In fact, the ability to cross-probe to and from different graphical and textual views is foundational to a good analysis environment. A user should be able to cross-probe from generated reports to the post-synthesis technology schematic, the generic RTL schematic, and the relevant HDL code to understand each issue in its appropriate context.

Incremental Flows

Even when QoR goals are met, they risk being lost in subsequent iterations of the design. Time spent meeting performance goals on critical blocks may be wasted when changes land in other areas of the design. This, together with the time lost to long synthesis and place-and-route runtimes, can jeopardize the project schedule and delivery. For this reason, a tool's incremental flow is an important consideration.

There are a variety of incremental flows available, and some of them, such as the block-based partition flow, require pre-planning. Perhaps the ideal solution to look for is a fully automatic incremental synthesis flow, where the user does not need to identify in advance the logic areas that may change in future runs. Equally important is that the flow not prevent full optimization of the design. Coupled with an incremental place-and-route flow, such a solution can provide significant time savings while preserving QoR.
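For contrast, a pre-planned flow typically asks the designer to mark stable blocks up front so their results can be reused across runs. Attribute names vary by vendor; keep_hierarchy below is shown only as one convention, purely as a sketch.

    // Marking a block whose hierarchy (and synthesis result) should be
    // preserved across runs. A fully automatic incremental flow needs
    // no such annotation; the tool itself detects unchanged logic.
    (* keep_hierarchy = "yes" *)
    module video_pipe (
      input  wire       clk,
      input  wire [7:0] pixel_in,
      output reg  [7:0] pixel_out
    );
      always @(posedge clk) pixel_out <= pixel_in;  // contents elided
    endmodule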

Conclusion

There are naturally other criteria to consider for FPGA synthesis, depending on the application. The point to remember is that selecting the right tool is not a one-dimensional exercise. Even when performance or area is the highest priority, capabilities such as analysis and incremental support can directly relate to quality-of-results. Designers and managers should realistically consider the various stages of their RTL-to-FPGA-implementation flow and select the synthesis tool that best supports each stage of the flow.
