
Three “I”s of FPGA Design: Iterations, Incremental and Intelligent Design Tools

The flexibility offered by field-programmable gate arrays (FPGAs) has made design iterations an integral part of the FPGA design process. Traditionally, engineers quickly wrote hardware description language (HDL) code for their design, ran synthesis and place-and-route on it, programmed the FPGA and tested design functionality directly in hardware. If a performance issue or a functional bug was discovered, the engineer modified the HDL, re-ran synthesis and place-and-route to obtain a new FPGA bitstream, and re-tested the hardware. This flow was fast enough to easily allow a few iterations in one day.
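As a rough sketch of this loop, the Python script below drives the three tool steps in sequence. The command names (synth, pnr, program_fpga) and their flags are placeholders for whatever vendor command-line tools a real flow would invoke, not actual utilities:

```python
# Minimal sketch of the traditional FPGA iteration loop.
# The tool names ("synth", "pnr", "program_fpga") and flags are
# placeholders, not real vendor utilities.
import subprocess
import sys

def run(cmd):
    """Run one flow step, aborting the iteration on failure."""
    print(f"==> {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"step failed: {cmd[0]}")

def full_iteration(top="top", sources=("top.v",)):
    run(["synth", "-top", top, *sources, "-o", "netlist.edf"])  # synthesis
    run(["pnr", "netlist.edf", "-o", "design.bit"])             # place-and-route
    run(["program_fpga", "design.bit"])                         # load bitstream
    # ...then test functionality directly in hardware, edit HDL, repeat.

if __name__ == "__main__":
    full_iteration()
```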

With the increasing size and complexity of the latest generation of FPGAs, and of the system designs targeted at them, the challenge of achieving timing closure has increased the need for iterations even further. However, the traditional iterative process has become extremely challenging. A key issue is the significant increase in runtime for these designs – both during synthesis, and especially during place-and-route. It is no longer feasible to turn around more than one full run of an FPGA design in a day.

Another issue is the unpredictability of achieving timing closure from one run to the next – hours spent meeting performance goals on a critical block may be completely lost due to a minor change made in another portion of the design, dramatically increasing the time required to complete the design. As a result, an important reason for choosing FPGAs in the first place, faster time-to-market, has been severely curtailed. Therefore, new design approaches are required to regain the time-to-market advantage of the latest generation of FPGAs and complex system-on-chip designs.

Current incremental “solutions”

Since it is unrealistic to eliminate design iterations from the FPGA design process, a mechanism to minimize the impact of incremental changes on a design is required. To accomplish this, a variety of incremental “solutions” have been proposed and implemented either by the designers or by design tools.

Some designers use a bottom-up flow that gives them full control over the hierarchy of the design, whereby only the hierarchical blocks that have changed are manually re-synthesized or re-placed-and-routed. In this incremental flow, since each hierarchical block is synthesized separately, no cross-boundary optimizations are possible. Which optimizations are lost varies from design to design, depending on HDL coding style and implementation, but they generally have a significant impact on the area utilization and timing performance of the design. Therefore, apart from being a completely manual process, this approach may yield sub-optimal quality-of-results (QoR).

The manual bottom-up incremental flow just described has been automated in some design tools with a block- or partition-based methodology. The designer works in a top-down environment and sets partitions on lower-level hierarchical blocks of the design using HDL or script attributes. These partitions have to be identified early in the design process; the design tools create a hard boundary around these blocks, and all optimizations across partition boundaries are disabled. If a change affects a hierarchical block defined as a partition, only that partition is re-synthesized and re-placed-and-routed; the rest of the design implementation is preserved from one run to the next (see the sketch below). Although this approach reduces design runtime and makes timing closure more predictable, it has two major drawbacks: partitions must be identified early in the design process, and the design's QoR may degrade.
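As an illustration of the rebuild decision at the heart of this methodology – not any particular tool's implementation – the following sketch maps each partition to its HDL sources and re-runs only the partitions whose sources have changed. The partition map, file names and rebuild step are all hypothetical:

```python
# Sketch of partition-based incremental rebuild: only partitions whose
# HDL sources changed since the last run are re-synthesized and re-P&R'd.
# The partition map, file names and rebuild step are all illustrative.
import hashlib
import json
from pathlib import Path

PARTITIONS = {                       # partition name -> HDL source files
    "cpu_core":  ["cpu.v", "alu.v"],
    "mem_ctrl":  ["mem_ctrl.v"],
    "io_bridge": ["io_bridge.v"],
}
STATE = Path("partition_hashes.json")

def digest(files):
    """Hash the concatenated contents of a partition's source files."""
    h = hashlib.sha256()
    for f in sorted(files):
        h.update(Path(f).read_bytes())
    return h.hexdigest()

def changed_partitions():
    old = json.loads(STATE.read_text()) if STATE.exists() else {}
    new = {name: digest(srcs) for name, srcs in PARTITIONS.items()}
    STATE.write_text(json.dumps(new))
    return [name for name in new if new[name] != old.get(name)]

for name in changed_partitions():
    print(f"re-running synthesis and P&R for partition: {name}")
```

Note that a byte-level hash such as this still flags comment-only edits as changes – a gap the truly intelligent change detection described below is meant to close.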

Identifying partitions early in the design process is difficult, since it requires prior knowledge of which hierarchical blocks may need an incremental change – and it is hard to predict where the design may have a performance or functional issue. As a result, a user rarely has a clear basis for defining optimal partitions. To resolve this dilemma, a designer may be tempted to define every major hierarchical block of the design as a partition. However, this can have a major impact on the QoR of the design, due to the same cross-boundary optimization limitations described earlier.

Ideal incremental solution

The ideal solution, from a designer's perspective, is a truly intelligent and automatic incremental design flow – one that does not require prior planning or definition of partitions at the start of the design process, and does not prevent either the synthesis or the place-and-route (P&R) tools from optimizing the design as needed to obtain optimal results. This must be a truly “push-button” approach in which the designer does not have to change their design methodology or manage the incremental changes.

For this flow to be effective, the synthesis tool must be intelligent enough to automatically detect HDL changes that truly affect design functionality, without requiring any user attributes. Automatic detection of changes must not be based on file timestamps, and the synthesis tool must filter out any HDL changes (such as comments) that have no effect on design functionality. This incremental synthesis solution must also apply all necessary optimization techniques across hierarchy boundaries and produce optimal QoR for the design. Another aspect that is critical for achieving timing closure is for the synthesis tool to evaluate the impact of an incremental change globally on the design. If a new critical path is created in another portion of the design due to a minor change in a block, then the tool must incrementally optimize the new critical path. This global optimization is only possible if the synthesis tool is capable of automatically propagating top-level constraints across the hierarchy instead of requiring designers to specify constraints at the block level. This prevents surprises later in the flow and reduces the number of design iterations.
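As a minimal sketch of the change-filtering idea – real tools compare elaborated design structures rather than raw text – the fingerprint below strips Verilog-style comments and collapses whitespace before hashing, so timestamp churn and comment-only edits never trigger a rebuild:

```python
# Sketch of content-based change detection that ignores non-functional
# edits: strip Verilog-style comments and collapse whitespace before
# hashing. (An assumption about which edits are non-functional; real
# tools analyze the elaborated design, not raw source text.)
import hashlib
import re

def functional_fingerprint(hdl_text: str) -> str:
    no_block = re.sub(r"/\*.*?\*/", " ", hdl_text, flags=re.DOTALL)  # /* ... */
    no_line  = re.sub(r"//[^\n]*", " ", no_block)                    # // ...
    normal   = re.sub(r"\s+", " ", no_line).strip()                  # whitespace
    return hashlib.sha256(normal.encode()).hexdigest()

old = functional_fingerprint("assign y = a & b; // original comment")
new = functional_fingerprint("assign y = a & b;  /* reworded note */")
assert old == new   # comment-only edit: no re-synthesis needed
```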

Automatic incremental synthesis is a key enabling technology for the effectiveness of the next step in the FPGA design flow: incremental P&R. By generating a minimally changed netlist, preserving the names of unchanged objects in the netlist, and optimizing the design completely, the synthesis tool makes the task of the P&R tool easier. The P&R tool needs to update the placement and routing of only the changed objects, reusing the existing placement and routing for unchanged portions of the design. This reduces P&R runtime drastically and makes timing closure more predictable, since the performance of unchanged portions of the design is preserved.
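To illustrate why preserved names matter – again as a sketch under simple assumptions, not any tool's actual algorithm – the function below diffs two name-keyed netlists, reuses prior placements for cells that match the previous run, and hands only the changed or new cells to the placer:

```python
# Sketch of how preserved netlist names enable incremental P&R: cells whose
# name and contents match the previous netlist keep their old placement;
# only new or changed cells are handed to the placer. Purely illustrative.

def incremental_place(old_netlist, new_netlist, old_placement, place_fn):
    """old/new_netlist: name -> cell description; old_placement: name -> (x, y)."""
    placement = {}
    to_place = []
    for name, cell in new_netlist.items():
        if old_netlist.get(name) == cell and name in old_placement:
            placement[name] = old_placement[name]   # reuse: unchanged object
        else:
            to_place.append(name)                   # changed or new: re-place
    placement.update(place_fn(to_place))            # incremental placement step
    return placement

# Example: only "u_alu" changed, so only it is re-placed.
old_nl = {"u_alu": "adder_v1", "u_fifo": "fifo16"}
new_nl = {"u_alu": "adder_v2", "u_fifo": "fifo16"}
old_pl = {"u_alu": (10, 4), "u_fifo": (2, 7)}
print(incremental_place(old_nl, new_nl, old_pl,
                        lambda names: {n: (0, 0) for n in names}))
```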

Conclusion

Iterations are inevitable for today's complex FPGAs and system-on-chip designs. The best way to manage and minimize the effects of these iterations is to use automatic incremental synthesis and place-and-route techniques. Intelligent design tools incorporate these techniques in a fashion that allows users to maintain their existing methodologies, focus on their design functionality, reduce design runtime and achieve timing closure faster. Iterations, Incremental and Intelligent design tools are the three “I”s of FPGA design, but there is a fourth critical “I” in this mix – “I”, the designer. It is this “I” who chooses to use intelligent design tools and can bring back the magic of faster time-to-market for FPGA designs…
