Three “I”s of FPGA Design: Iterations, Incremental and Intelligent Design Tools

The flexibility offered by field-programmable gate arrays (FPGAs) has made design iterations an integral part of the FPGA design process. Traditionally, engineers quickly wrote hardware description language (HDL) code for their design, ran synthesis and place-and-route on it, programmed the FPGA and tested the design’s functionality directly in hardware. If a performance issue or a functional bug was discovered, the HDL was modified, the design was re-synthesized and re-placed-and-routed to obtain a new FPGA bitstream, and the hardware was re-tested. This flow was fast enough to easily allow a few iterations in one day.
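To make that loop concrete, here is a minimal, tool-agnostic Tcl sketch of the traditional iteration cycle. All of the command names (read_hdl, run_synthesis, run_par, write_bitstream, program_device) and file names are hypothetical placeholders, not any particular vendor’s API.

```tcl
# Hypothetical sketch of the classic edit-compile-test loop; every command
# and file name below is an illustrative placeholder.

read_hdl  {top.v ctrl.v datapath.v}   ;# read the (possibly edited) HDL sources
run_synthesis -top top                ;# full re-synthesis of the entire design
run_par                               ;# full re-place-and-route
write_bitstream top.bit               ;# generate a fresh FPGA bitstream
program_device  top.bit               ;# program the part and test in hardware
# Found a bug or missed timing? Edit the HDL and repeat every step above.
```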

With the increasing size and complexity of the latest generation of FPGAs, and of the system designs targeted at them, the struggle to achieve timing closure has increased the need for iterations even further. However, iterating as before has become extremely challenging. A key issue is the significant increase in runtime for these designs, both during synthesis and especially during place-and-route. It is no longer feasible to turn around more than one run of an FPGA design in a day.

Another issue is the unpredictability of timing closure from one run to the next: hours spent meeting performance goals on a critical block may be completely lost because of a minor change made in another portion of the design, dramatically increasing the time required to complete the design. As a result, one of the key reasons for choosing FPGAs in the first place, faster time-to-market, has been severely eroded. New design approaches are therefore required to regain the time-to-market advantage of the latest generation of FPGAs and complex system-on-chip designs.

Current incremental “solutions”

Since it is unrealistic to eliminate design iterations from the FPGA design process, a mechanism is needed to minimize the impact of incremental changes on a design. To this end, a variety of incremental “solutions” have been proposed and implemented, either by designers themselves or by design tools.

Some designers use a bottom-up flow that gives them full control over the design hierarchy: only the hierarchical blocks that have changed are manually re-synthesized or re-placed-and-routed. Because each hierarchical block is synthesized separately in this flow, no cross-boundary optimizations are possible. Which optimizations apply varies from design to design, depending on HDL coding style and implementation, but they generally have a large impact on the area utilization and timing performance of the design. Apart from being a completely manual process, therefore, this approach can leave the design’s quality-of-results (QoR) sub-optimal.
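As a rough illustration, such a manual bottom-up flow might look like the following Tcl sketch, with each changed block compiled in isolation. The commands, the change-tracking helper and the block names are all hypothetical.

```tcl
# Hypothetical sketch of a manual bottom-up flow: each hierarchical block is
# compiled on its own, so the tool cannot optimize across block boundaries.

foreach block {uart_if dma_engine pixel_pipe} {
    ;# In practice "has this block changed?" is tracked by hand or by script.
    if {[block_has_changed $block]} {
        read_hdl       ${block}.v
        run_synthesis  -top $block    ;# the block sees none of its context
        write_netlist  ${block}.ngo
    }
}
# A top-level run then stitches the per-block netlists together as-is.
```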

The manual bottom-up incremental flow just described has been automated in some design tools with a block- or partition-based methodology. The designer works in a top-down environment and marks lower-level hierarchical blocks of the design as partitions using HDL or script attributes. These partitions have to be identified early in the design process; the design tools create a hard boundary for these blocks, and all optimizations across partition boundaries are disabled. If a change affects a hierarchical block defined as a partition, only that partition is re-synthesized and re-placed-and-routed; the rest of the design implementation is preserved from one run to the next. Although this approach reduces runtime and makes timing closure more predictable, it has two major drawbacks: partitions must be identified early in the design process, and the design’s QoR degrades.
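Conceptually, a partition-based setup forces the designer to commit to boundaries up front, along the lines of this hypothetical Tcl sketch. The define_partition command, its -state option and the instance paths are illustrative placeholders, not a specific tool’s syntax.

```tcl
# Hypothetical partition-based incremental setup: the designer must decide,
# before implementation, which hierarchy instances become partitions.

define_partition /top/uart_if     -state rebuild   ;# changed: re-implement
define_partition /top/dma_engine  -state preserve  ;# untouched: reuse result
define_partition /top/pixel_pipe  -state preserve  ;# untouched: reuse result

# Every partition boundary is "hard": no optimization crosses it, even when
# doing so would improve the design's area or timing.
```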

Identifying partitions early in the design process is difficult because it requires knowing in advance which hierarchical blocks will receive incremental changes, and it is hard to predict where a design will have a performance or functional issue. A designer therefore has no clear basis for defining optimal partitions. To escape this dilemma, a designer may be tempted to define every major hierarchical block of the design as a partition. However, this can have a major impact on the design’s QoR, due to the same cross-boundary optimization limitations described earlier.

Ideal incremental solution

The ideal solution, from the designer’s perspective, is a truly intelligent and automatic incremental design flow: one that requires no prior planning or definition of partitions at the start of the design process, and does not prevent either the synthesis or the place-and-route (P&R) tools from optimizing the design as needed to obtain optimal results. It must be a truly “push-button” approach in which the designer neither changes their design methodology nor manages the incremental changes by hand.

For this flow to be effective, the synthesis tool must be intelligent enough to automatically detect the HDL changes that truly affect design functionality, without requiring any user attributes. Automatic change detection must not be based on file timestamps, and the synthesis tool must filter out HDL changes (such as edits to comments) that have no effect on design functionality. This incremental synthesis solution must also apply all necessary optimization techniques across hierarchy boundaries and produce optimal QoR for the design. Another aspect critical for achieving timing closure is that the synthesis tool evaluate the impact of an incremental change globally, across the whole design. If a minor change in one block creates a new critical path in another portion of the design, the tool must incrementally optimize that new critical path. This global optimization is only possible if the synthesis tool can automatically propagate top-level constraints across the hierarchy instead of requiring designers to specify constraints at the block level. This prevents surprises later in the flow and reduces the number of design iterations.
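The constraint side of this can be sketched with standard SDC commands. The create_clock and set_input_delay commands below are standard SDC; the automatic derivation of block-level budgets described in the comments is what such an intelligent tool would perform internally, not an actual command.

```tcl
# Top-level constraints, written once in standard SDC at the chip boundary.

create_clock -name sys_clk -period 5.0 [get_ports clk]    ;# 200 MHz system clock
set_input_delay -clock sys_clk 1.2 [get_ports data_in*]   ;# board-level arrival

# An intelligent incremental synthesis tool derives block-level timing budgets
# from these top-level constraints on its own, so a change inside any block is
# always re-timed against the real, global timing context rather than against
# hand-written (and easily stale) block-level constraints.
```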

Automatic incremental synthesis is a key enabling technology for the next step in the FPGA design flow: incremental P&R. By generating a minimally changed netlist, preserving the names of unchanged netlist objects, and fully optimizing the design, the synthesis tool makes the P&R tool’s task easier: it needs to update placement and routing only for the changed objects, and can reuse the placement and routing of the unchanged portions of the design. This drastically reduces P&R runtime and makes timing closure faster and more predictable, since the performance of the unchanged portions of the design is preserved.
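In such a flow, an incremental P&R run might conceptually look like the following Tcl sketch, where the previous implementation is read in as a reference and only changed objects are re-implemented. All command names, options and reuse figures here are illustrative placeholders.

```tcl
# Hypothetical incremental P&R run: reuse the prior implementation wherever
# the incoming netlist is unchanged, and re-implement only what changed.

read_netlist    top_incr.net        ;# minimally changed netlist from synthesis
read_reference  previous_run.db     ;# placement/routing of the previous run
run_par -incremental                ;# re-place/re-route only changed objects
report_reuse                        ;# e.g. "placement 97% reused, routing 95%"
write_bitstream top_v2.bit
```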

Conclusion

Iterations are inevitable for today’s complex FPGAs and system-on-chip designs. The best way to manage and minimize their effect on a design is to use automatic incremental synthesis and place-and-route techniques. Intelligent design tools incorporate these techniques in a way that lets users keep their existing methodologies, focus on their design’s functionality, reduce design runtime and achieve timing closure faster. Iterations, Incremental and Intelligent design tools are the three “I”s of FPGA design, but there is a fourth critical “I” in this mix: “I”, the designer. It is this “I” who chooses to use intelligent design tools and can bring back the magic of faster time-to-market for FPGA designs…
