
Three “I”s of FPGA Design: Iterations, Incremental and Intelligent Design Tools

The flexibility offered by field-programmable gate arrays (FPGAs) has made design iterations an integral part of the FPGA design process. Traditionally, engineers quickly wrote hardware description language (HDL) code for a design, ran synthesis and place-and-route, programmed the FPGA, and tested the design's functionality directly in hardware. If a performance issue or a functional bug was discovered, they modified the HDL, re-ran synthesis and place-and-route to obtain a new FPGA bitstream, and re-tested the hardware. This flow was fast enough to easily allow several iterations in a single day.
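To make that loop concrete, here is a minimal Python sketch of the traditional iteration; the synth, pnr and program_fpga commands are hypothetical stand-ins for whatever the vendor toolchain actually provides, not any real CLI.

```python
# A sketch of the traditional FPGA iteration loop. The commands
# "synth", "pnr" and "program_fpga" are hypothetical placeholders,
# not any real vendor's CLI.
import subprocess

def run(cmd):
    # Stop the loop immediately if a tool step fails.
    subprocess.run(cmd, check=True)

def one_iteration(hdl_sources):
    run(["synth", *hdl_sources, "-o", "design.netlist"])  # synthesis
    run(["pnr", "design.netlist", "-o", "design.bit"])    # place-and-route
    run(["program_fpga", "design.bit"])                   # program the FPGA
    # Test functionality in hardware; on a bug, edit the HDL and rerun.
```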

With the increasing size and complexity of the latest generation of FPGAs, and of the system designs targeted at them, achieving timing closure has made iterations even more necessary. However, iterating as before has become extremely challenging. A key issue is the significant increase in runtime for these designs, both during synthesis and especially during place-and-route. It is no longer feasible to turn around more than one run of an FPGA design in a day.

Another issue is the unpredictability of timing closure from one run to the next: hours spent meeting performance goals on a critical block may be completely lost due to a minor change made in another portion of the design, dramatically increasing the time required to complete the design. As a result, one of the key reasons for choosing an FPGA in the first place, faster time-to-market, has been severely eroded. New design approaches are therefore required to regain the time-to-market advantage of the latest generation of FPGAs and complex system-on-chip designs.

Current incremental “solutions”

Since it is unrealistic to eliminate design iterations from the FPGA design process, a mechanism is needed to minimize the impact of incremental changes on a design. To this end, a variety of incremental “solutions” have been proposed and implemented, either by designers themselves or by design tools.

Some designers use a bottom-up flow that gives them full control over the design hierarchy: only the hierarchical blocks that have changed are manually re-synthesized or re-placed-and-routed. Because each hierarchical block is synthesized separately in this flow, no cross-boundary optimizations are possible. Which optimizations apply varies from design to design based on HDL coding style and implementation, but they generally have a big impact on a design's area utilization and timing performance. Therefore, apart from being a completely manual process, this approach may yield sub-optimal quality-of-results (QoR).
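As an illustration of the bookkeeping this manual flow implies, the following Python sketch (not any vendor's flow) re-runs per-block synthesis only for blocks whose HDL sources have changed, tracked by content hashes; the synth_block command is a hypothetical placeholder.

```python
# A minimal sketch of the bookkeeping behind a manual bottom-up
# incremental flow: re-run per-block synthesis only for blocks whose
# HDL sources have changed, tracked by content hashes.
# "synth_block" is a hypothetical per-block synthesis command.
import hashlib
import json
import pathlib
import subprocess

STATE = pathlib.Path("block_hashes.json")

def source_hash(files):
    # Hash the concatenated contents of a block's HDL files.
    h = hashlib.sha256()
    for f in sorted(files):
        h.update(pathlib.Path(f).read_bytes())
    return h.hexdigest()

def incremental_synthesis(blocks):
    """blocks maps a hierarchical block name to its list of HDL files."""
    old = json.loads(STATE.read_text()) if STATE.exists() else {}
    new = {name: source_hash(files) for name, files in blocks.items()}
    for name, files in blocks.items():
        if new[name] != old.get(name):  # block changed since last run
            subprocess.run(["synth_block", name, *files], check=True)
    STATE.write_text(json.dumps(new))
```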

The manual bottom-up incremental flow just described has been automated in some design tools with a block- or partition-based methodology. The designer works in a top-down environment and marks lower-level hierarchical blocks of the design as partitions using HDL or script attributes. These partitions must be identified early in the design process; the design tools create a hard boundary around these blocks, and all optimizations across partition boundaries are disabled. If a change affects a hierarchical block defined as a partition, only that partition is re-synthesized and re-placed-and-routed; the rest of the design implementation is preserved from one run to the next. Although this approach reduces runtime and makes timing closure more predictable, it has two major drawbacks: partitions must be identified early in the design process, and the design's QoR degrades.

Identifying partitions early in the design process is difficult because it requires knowing in advance which hierarchical blocks are likely to change incrementally, and it is hard to predict where a design will have a performance or functional issue. A designer therefore has no clear basis for defining optimal partitions. To escape this dilemma, a designer may be tempted to define every major hierarchical block as a partition. However, this can severely hurt the design's QoR due to the same cross-boundary optimization limitations described earlier.

Ideal incremental solution

The ideal solution, from the designer's perspective, is a truly intelligent and automatic incremental design flow: one that requires no prior planning or definition of partitions at the start of the design process, and that does not prevent either the synthesis or the place-and-route (P&R) tool from optimizing the design as needed for optimal results. It must be a truly “push-button” approach in which the designer neither changes their design methodology nor manages the incremental changes.

For this flow to be effective, the synthesis tool must be intelligent enough to automatically detect the HDL changes that truly affect design functionality, without requiring any user attributes. Automatic change detection must not be based on file timestamps, and the synthesis tool must filter out HDL changes (such as edits to comments) that have no effect on design functionality. This incremental synthesis solution must also apply all necessary optimization techniques across hierarchy boundaries and produce optimal QoR. Another aspect critical to timing closure is that the synthesis tool evaluate the impact of an incremental change globally: if a minor change in one block creates a new critical path in another portion of the design, the tool must incrementally optimize that new critical path. Such global optimization is only possible if the synthesis tool can automatically propagate top-level constraints across the hierarchy instead of requiring designers to specify constraints at the block level. This prevents surprises later in the flow and reduces the number of design iterations.
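As a sketch of what timestamp-independent change detection could look like, the following Python normalizes Verilog-style comments and whitespace out of the HDL text before hashing, so a comment-only edit produces an identical hash. A real synthesis tool would compare at the parsed-design level; this naive regex version is an illustrative simplification.

```python
# A sketch of timestamp-independent change detection: strip
# Verilog-style comments and collapse whitespace before hashing, so
# comment-only edits do not trigger re-synthesis. The regexes ignore
# corner cases such as "//" inside string literals.
import hashlib
import re

def normalized_hash(hdl_text):
    text = re.sub(r"/\*.*?\*/", " ", hdl_text, flags=re.DOTALL)  # /* ... */
    text = re.sub(r"//[^\n]*", " ", text)                        # // ...
    text = re.sub(r"\s+", " ", text).strip()                     # whitespace
    return hashlib.sha256(text.encode()).hexdigest()

def functionally_changed(old_text, new_text):
    # True only if the edit survives comment/whitespace stripping.
    return normalized_hash(old_text) != normalized_hash(new_text)
```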

Automatic incremental synthesis is a key enabling technology for the effectiveness of the next step in the FPGA design flow: incremental P&R. By generating a minimally changed netlist, preserving the names of unchanged netlist objects, and optimizing the design fully, the synthesis tool makes the P&R tool's task easier. The P&R tool needs to update the placement and routing of only the changed objects, reusing the existing placement and routing for the unchanged portions of the design. This drastically reduces P&R runtime and makes timing closure faster and more predictable, since the performance of the unchanged portions of the design is preserved.
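The placement-reuse idea can be sketched in a few lines of Python: cells whose names survive from the previous netlist keep their old locations, and only new or renamed cells go to the placer. The data structures and the place_fn callback are hypothetical stand-ins for a real tool's databases, not an actual P&R API.

```python
# A sketch of name-based placement reuse for incremental P&R. The
# data structures and "place_fn" callback are hypothetical stand-ins
# for a real tool's databases.
def incremental_place(old_placement, new_cells, place_fn):
    """old_placement: {cell_name: (x, y)} from the previous run.
    new_cells: cell names present in the new netlist.
    place_fn(cells, fixed): places 'cells' around the 'fixed' ones."""
    reused = {c: old_placement[c] for c in new_cells if c in old_placement}
    to_place = [c for c in new_cells if c not in old_placement]
    fresh = place_fn(to_place, fixed=reused)  # place only the changed cells
    return {**reused, **fresh}
```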

Conclusion

Iterations are inevitable for today's complex FPGAs and system-on-chip designs. The best way to manage and minimize their effect on a design is to use automatic incremental synthesis and place-and-route techniques. Intelligent design tools incorporate these techniques in a way that lets users keep their existing methodologies, focus on design functionality, reduce runtime and achieve timing closure faster. Iterations, Incremental and Intelligent design tools are the three “I”s of FPGA design, but there is a fourth critical “I” in this mix: “I” the designer. It is this “I” who chooses to use intelligent design tools and can bring back the magic of faster time-to-market for FPGA designs…
