
From Pinout to Layout: The FPGA/PCB Balancing Act

When a fifth of all designers say board integration is the biggest trouble spot in getting their product out the door[1], one has to conclude that the FPGA/PCB co-design dilemma remains unsolved for many. Either the FPGA designers continue to make pin assignments that ease their own design closure but complicate closure in the PCB domain, or the PCB team locks down pins early in the design cycle only to complicate design closure in the FPGA domain.

Though technology solutions have been available to facilitate pin assignment closure, bridging the FPGA and PCB gap requires collaborative techniques that do not demand that teams adopt new design behaviors. To keep the iterative process from eroding time-to-market goals and putting product revenue at risk, technology needs to be enhanced in multiple areas of the flow.

One such area is the FPGA level, where the FPGA team is tasked with RTL development, IP integration, verification, and implementation via synthesis and in-chip place-and-route (P&R). It is the synthesis and P&R stage that is typically impacted when board integration becomes an issue. In an initial iteration, an FPGA designer usually assigns pins to clock signals or specific high-speed interface signals, but otherwise prefers to leave the remaining pin assignments to P&R. Even with this flexibility, design closure goals such as timing and area may not be met in a single pass, forcing a long trial-and-error process in which different optimizations are explored. Today’s largest FPGAs may take many hours for a single synthesis and P&R run, so several iterations can easily translate into a project delay.

To make matters worse, after all this effort the PCB team may demand another round of iterations because the pinout assumptions made by the FPGA team have increased the number of PCB signal layers required to complete routing, increasing manufacturing costs while simultaneously reducing product reliability.

Escalating Pin Counts Add Up to Cross-Domain Complexity

Modern product designs leveraging standard FPGA capabilities span megahertz-to-gigahertz operating frequencies while mandating more expensive High Density Interconnect (HDI) manufacturing processes. An FPGA design with as few as 100 signal pins has roughly 9 x 10^157 possible I/O assignments. Not all of these pin assignments are “legal,” and while the flexibility of FPGA I/O may be leveraged to shorten PCB trace lengths and meet board timing constraints, most PCB designers do not possess the FPGA device expertise to differentiate between “legal” and “illegal” pin assignments. Yet we have FPGA devices that support over one thousand signal pins, resulting in a productivity quagmire when the FPGA is integrated onto the printed circuit board. Time-to-market is not the only business objective to suffer from the challenges FPGAs introduce into the system design process: product reliability and manufacturing costs are put at risk, as is the ability to meet system requirements.
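That figure is simply the number of ways to order 100 signals across 100 candidate package pins, i.e. 100 factorial. A minimal Python sketch makes the arithmetic concrete (illustrative only; banking, I/O-standard, and differential-pair rules disqualify the vast majority of these assignments):

```python
import math

# Number of ways to assign 100 signals to 100 candidate package pins.
# Illustrative only: real devices impose banking and I/O-standard rules
# that make most of these assignments "illegal".
signals = 100
assignments = math.factorial(signals)
print(f"{assignments:.2e}")  # ~9.33e+157
```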

Attempting to leverage a traditional prototype/test/re-spin design creation process does not converge, given the enormous number of possible solutions. A “bad” FPGA pin assignment will increase the number of PCB signal layers required to route the board, multiply the number of PCB vias required, and yield PCB trace lengths that preclude meeting system timing constraints. PCB vias are a source of mechanically induced electrical failures that degrade product reliability and erode customer satisfaction. The challenges of FPGA/PCB co-design are exacerbated further when a board carries multiple FPGAs.

Two Domains Seeking a Balance

So… how to simplify the process, shorten the feedback loop, and improve the project schedule? The answer lies at both ends of the flow: both on-chip (FPGA) and off-chip (PCB) technologies need to address the interdependencies.

In an efficient flow, FPGA development proceeds in parallel with PCB development. Initial FPGA synthesis and P&R runs should be performed to iron out design issues that may be unrelated to pin assignments. However, the sooner the FPGA team receives a reasonable estimate of pin locations, the better.

Contrary to popular belief, the PCB system design process does not require a completed FPGA implementation to begin. All that is required is an FPGA signal interface definition. Frequently, FPGA I/O standards are dictated by the components the FPGA connects to rather than by anything internal to the FPGA. With that interface definition in hand, the PCB design team can create a nearly optimal set of FPGA pin assignment constraints.
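As a concrete illustration of what that hand-off might look like, the sketch below turns a small interface definition into pin constraints. The signal names, package pins, and I/O standards are hypothetical, and the output uses Xilinx-style XDC properties as one common constraint format; any vendor’s equivalent constraint file would serve the same purpose.

```python
# Illustrative only: convert a board-level interface definition into pin
# constraints. Signal names, package pins, and I/O standards are hypothetical;
# the output uses Xilinx-style XDC syntax as one common hand-off format.
interface = [
    # (FPGA port, package pin chosen by the PCB team, I/O standard)
    ("sys_clk",   "E3",  "LVCMOS33"),
    ("uart_tx",   "D10", "LVCMOS33"),
    ("ddr_dq[0]", "K5",  "SSTL135"),
    ("ddr_dq[1]", "L3",  "SSTL135"),
]

with open("pinout.xdc", "w") as xdc:
    for port, pin, iostd in interface:
        xdc.write(f"set_property PACKAGE_PIN {pin} [get_ports {{{port}}}]\n")
        xdc.write(f"set_property IOSTANDARD {iostd} [get_ports {{{port}}}]\n")
```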

Once these “near-perfect” pin constraints are handed off, the FPGA team may proceed with actual design closure. One major shortcoming in a typical FPGA flow is that RTL synthesis does not take physical placement, routing resources, or pin assignments into account, forcing P&R to churn on a sub-optimal gate-level netlist to route logic signals to the correct device pins.

FPGA Optimization and “Pin-Awareness”

On the other hand, when early physical FPGA synthesis is performed before in-chip P&R, results can be significantly improved. A physical synthesis flow takes the physical characteristics of the device and the pin assignments into account and hence has a better chance of achieving design closure for a heavily pin-constrained design. At an early stage, logic blocks are optimized not only in terms of their estimated routing resources and estimated placement on the device, but also in terms of the signals associated with device pins. Physical synthesis performs a series of physical optimizations such as retiming, register replication, and re-synthesis to improve the timing of the netlist, all while honoring clock and I/O constraints, producing a “pin-aware” netlist. This lightens the load on P&R, shortening P&R run-times and the time to FPGA design closure under the estimated pin assignments.
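As a toy illustration of why pin awareness helps (a sketch only, not any vendor’s actual cost model): if one register drives output pads in two far-apart banks, a pin-aware optimizer can replicate that register so each copy sits next to the pad it feeds, shortening the worst register-to-pad path. The coordinates and delay figures below are hypothetical.

```python
from math import hypot

# Toy model of a pin-aware register-replication decision.
# Grid coordinates and delay-per-unit are hypothetical, for illustration only.
pads = {"led_en_a": (2, 60), "led_en_b": (58, 4)}  # two output pads, far apart
reg_site = (30, 30)                                 # compromise placement of the single register
ps_per_unit = 15.0                                  # crude routing delay per grid unit

def delay(a, b):
    """Straight-line routing-delay estimate between two grid locations."""
    return hypot(a[0] - b[0], a[1] - b[1]) * ps_per_unit

# Without replication: one register must reach both pads from one site.
shared = max(delay(reg_site, pad) for pad in pads.values())

# With replication: each copy is placed a couple of grid units from its pad.
replicated = max(delay((x - 2, y), (x, y)) for x, y in pads.values())

print(f"worst register-to-pad delay: {shared:.0f} ps shared vs {replicated:.0f} ps replicated")
```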

Once the FPGA I/O is optimized for both the PCB and the FPGA, the pin constraints are locked down. Subsequent changes in pin assignments will be minimal, allowing for quick iterations on the FPGA side with only minor adjustments. The result is a shorter FPGA-to-PCB feedback loop and, ultimately, a faster path to meeting cost and system performance requirements.

End-to-End Flow Intelligence

Solutions for the FPGA-to-PCB connection have matured in recent years, improving board integration cycle time. Unfortunately, design complexity, high pin counts, and time-to-market requirements continue to challenge even the latest methodologies. For further improvement, next-generation solutions require intelligence at both ends of the flow as well as the ability to exchange data. Automatic multi-FPGA I/O optimization communicating with pin-aware physical synthesis is a step toward such a unified FPGA-to-board methodology.


[1] Techfocus Media, Inc., November 2005
