feature article

From Pinout to Layout: The FPGA/PCB Balancing Act

When a fifth of all designers say board integration is the biggest trouble spot in getting their product out the door[1], one has to conclude that the FPGA/PCB co-design dilemma remains unsolved for many. Either the FPGA designers continue to make pin assignments that simplify their own design closure goals but complicate those in the PCB domain, or the PCB team locks down pins early in the design cycle only to complicate design closure in the FPGA domain.

Though technology solutions have been available to facilitate pin assignment closure, bridging the FPGA and PCB gap requires collaborative techniques that do not demand that teams adopt new design behaviors. To keep the iterative process from impacting time-to-market goals and risking product revenue, technologies need to be enhanced in multiple areas of the flow.

One such area is at the FPGA level, where the FPGA team is tasked with RTL development, IP integration, verification, and implementation via synthesis and in-chip place-and-route (P&R). It is the synthesis and P&R stage that is typically impacted when board integration becomes an issue. In an initial iteration, an FPGA designer usually assigns pins to clock signals or specific high-speed interface signals, but otherwise prefers to leave the remaining pin assignments to P&R. Even with this flexibility, design closure goals, such as timing and area, may not be met in a single pass, forcing a long trial-and-error process where different optimizations are explored. Today’s largest FPGAs may take many hours for a single synthesis and P&R run, so several iterations can easily translate into a project delay.

To make matters worse, after all this effort, the PCB team may demand another round of iterations because the pinout assumptions made by the FPGA team have increased the number of PCB signal layers required to complete routing, thereby increasing manufacturing costs while simultaneously reducing product reliability.

Escalating Pin Counts Add Up to Cross-Domain Complexity

Modern product designs leveraging standard FPGA capabilities span megahertz to gigahertz operating frequencies while simultaneously mandating more expensive High Density Interconnect (HDI) manufacturing processes. An FPGA design with as few as 100 signal pins has potentially 9 × 10^157 possible I/O assignments. Not all of these pin assignments are “legal,” and while the flexibility of FPGA I/O may be leveraged to shorten PCB trace lengths and meet board timing constraints, most PCB designers do not possess the FPGA device expertise to differentiate between “legal” and “illegal” pin assignments. Yet we have FPGA devices that can support over one thousand signal pins, resulting in a productivity quagmire when the FPGA is integrated onto the printed circuit board design. FPGAs thus introduce several challenges into the system design process, and “time to market” is not the only business objective to suffer from cross-domain complexity: product reliability and manufacturing costs are put at risk, as is the ability to meet system requirements.
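The 9 × 10^157 figure is simply the number of ways to order 100 distinct signals across 100 pins, i.e. 100 factorial, ignoring any electrical legality rules. A quick back-of-the-envelope check (not from the article):

```python
import math

# Number of ways to assign 100 distinct signals to 100 distinct pins,
# ignoring legality constraints: a straight permutation count, 100!.
assignments = math.factorial(100)

print(f"{assignments:.2e}")   # roughly 9.33e+157
print(len(str(assignments)))  # 158 digits
```

Even pruning the vast majority of these as electrically illegal leaves a search space no prototype/test/re-spin loop can explore by trial and error.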

Attempting to leverage a traditional system prototype/test/re-spin design creation process does not yield a convergent process due to the enormous number of possible solutions. A “bad” FPGA pin assignment will increase the number of PCB signal layers required to route the board, multiply the number of PCB vias required, and yield PCB trace lengths that preclude meeting system timing constraints. PCB vias are a source of mechanically induced electrical failures that degrade product reliability and erode customer satisfaction. The challenges of FPGA/PCB co-design are exacerbated by multi-FPGA PCB designs.

Two Domains Seeking a Balance

So… how to simplify the process, shorten the feedback loop, and improve project schedule? The answer lies at both ends of the flow. Both on-chip (FPGA) and off-chip (PCB) technologies need to address the interdependencies.

In an efficient flow, FPGA development is done in parallel with PCB development. Initial FPGA synthesis and P&R runs should be performed to iron out design issues that may be unrelated to pin assignments. However, the sooner the FPGA team receives a reasonable estimate of pin locations, the better.

The PCB system design process does not require a completed FPGA implementation to begin (contrary to popular belief). All that is required is an FPGA signal interface definition. Frequently, FPGA I/O standards are defined by the components the FPGA connects to, not by any internal FPGA requirement. With the FPGA interface definition, the PCB design team may create a nearly optimal set of FPGA pin assignment constraints.
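As a concrete illustration of such a handoff, a pin-constraint file can capture locations and I/O standards before the FPGA logic is finished. The fragment below is a hypothetical Xilinx-style XDC sketch; the port names, pin sites, and bus width are invented for illustration:

```tcl
# Hypothetical pin-assignment handoff (illustrative names, not a real board).
set_property PACKAGE_PIN AB12 [get_ports sys_clk]
set_property IOSTANDARD  LVDS [get_ports sys_clk]

set_property PACKAGE_PIN C14 [get_ports {data_bus[0]}]
set_property PACKAGE_PIN C15 [get_ports {data_bus[1]}]
set_property IOSTANDARD SSTL135 [get_ports {data_bus[*]}]
```

Other vendors express the same intent in their own formats (e.g., Quartus assignment files); the key point is that the constraints exist as data both teams can exchange and iterate on.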

Once these “near-perfect” pin constraints are handed off, the FPGA team may proceed with actual design closure. One major shortcoming in a typical FPGA flow is that RTL synthesis does not take physical placement, routing resources, or pin assignments into account, forcing P&R to churn on a sub-optimal gate-level netlist to route logic signals to the correct device pins.

FPGA Optimization and “Pin-Awareness”

On the other hand, when early physical FPGA synthesis is performed before in-chip P&R, results can be significantly improved. A physical synthesis flow takes the physical characteristics of the device and the pin assignments into account and hence has a better chance of achieving design closure for a heavily pin-constrained design. At an early stage, logic blocks are optimized not only in terms of their estimated routing resources and estimated placement on the device, but also in terms of the signals associated with device pins. Physical synthesis performs a series of physical optimizations such as retiming, register replication, and re-synthesis to improve the timing of the netlist, all while taking clock and I/O constraints into account, providing a “pin-aware” netlist. This lightens the load for P&R, allowing for shorter P&R run-times and a shorter path to FPGA design closure under the estimated pin assignments.
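The intuition behind pin-awareness can be caricatured with a toy wirelength estimate. The grid, coordinates, and cost function below are invented for illustration; real P&R cost models are far richer:

```python
# Toy sketch (invented grid and placements, not a real P&R model):
# compare the estimated route length for logic placed with and without
# knowledge of its destination I/O pad.
def manhattan(a, b):
    """Simple wirelength estimate between two grid cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

io_pad = (0, 9)          # hypothetical pad location on a 10x10 fabric
pin_oblivious = (5, 5)   # logic dropped near the die center
pin_aware = (1, 8)       # logic pulled toward its pad during synthesis

print(manhattan(pin_oblivious, io_pad))  # longer estimated route
print(manhattan(pin_aware, io_pad))      # shorter route, easier for P&R
```

Pulling pin-connected logic toward its pad before P&R is what leaves the router less work to do, which is where the run-time savings come from.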

Once the FPGA I/O is optimized for the PCB and FPGA, the pin constraints are locked down. Subsequent changes in pin assignments will be minimal, allowing for a quick iteration on the FPGA side with minor adjustments. The result is a shorter FPGA-to-PCB feedback loop and ultimately a faster path to meeting cost and system performance requirements.

End-to-End Flow Intelligence

Solutions for the FPGA-to-PCB connection have matured in recent years, improving board integration cycle time. Unfortunately, design complexity, high pin counts, and time-to-market requirements continue to challenge even the latest methodologies. For further improvement, next-generation solutions require intelligence at both ends of the flow as well as the ability to exchange data. Automatic multi-FPGA I/O optimization communicating with pin-aware physical synthesis is a step toward such a unified FPGA-to-board methodology.

[1] Techfocus Media, Inc., November 2005
