
From Pinout to Layout: The FPGA/PCB Balancing Act

When a fifth of all designers say board integration is the biggest trouble spot in getting their product out the door[1], one has to conclude that the FPGA/PCB co-design dilemma remains unsolved for many. Either the FPGA designers continue to make pin assignments that simplify their own design closure goals but complicate those in the PCB domain, or the PCB team locks down pins early in the design cycle only to complicate design closure in the FPGA domain.

Though technology solutions have long been available to facilitate pin assignment closure, bridging the FPGA/PCB gap requires collaborative techniques that do not force either team to adopt new design behaviors. To keep the iterative process from eroding time-to-market goals and putting product revenue at risk, technology needs to be enhanced in multiple areas of the flow.

One such area is at the FPGA level, where the FPGA team is tasked with RTL development, IP integration, verification, and implementation via synthesis and in-chip place-and-route (P&R). It is the synthesis and P&R stage that is typically impacted when board integration becomes an issue. In an initial iteration, an FPGA designer usually assigns pins to clock signals or specific high-speed interface signals but otherwise prefers to leave the remaining pin assignments to P&R. Even with this flexibility, design closure goals such as timing and area may not be met in a single pass, forcing a long trial-and-error process in which different optimizations are explored. Today’s largest FPGAs may take many hours for a single synthesis and P&R run, so several iterations can easily translate into a project delay.

To make matters worse, after all this effort, the PCB team may demand another round of iterations because the pinout assumptions made by the FPGA team have increased the number of PCB signal layers required to complete routing, thereby increasing manufacturing costs while simultaneously reducing product reliability.

Escalating Pin Counts Add Up to Cross-Domain Complexity

Modern product designs leveraging standard FPGA capabilities span megahertz-to-gigahertz operating frequencies while simultaneously mandating more expensive High Density Interconnect (HDI) manufacturing processes. An FPGA design with as few as 100 signal pins has potentially 9 × 10¹⁵⁷ possible I/O assignments. Not all of these pin assignments are “legal,” and while the flexibility of FPGA I/O may be leveraged to shorten PCB trace lengths and meet board timing constraints, most PCB designers do not possess the FPGA device expertise to differentiate between “legal” and “illegal” pin assignments. Yet we have FPGA devices that can support over one thousand signal pins, resulting in a productivity quagmire when the FPGA is integrated onto the printed circuit board. Time-to-market is not the only business objective to suffer from this cross-domain complexity: FPGAs introduce several challenges into the system design process, putting product reliability, manufacturing costs, and the ability to meet system requirements at risk.
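That 9 × 10¹⁵⁷ figure can be sanity-checked by counting the ways to order 100 signals across 100 candidate pin sites, i.e. 100!, ignoring banking rules and other legality constraints. A minimal Python check, purely illustrative:

```python
import math

# Assigning 100 signals to 100 candidate I/O sites, with no legality
# constraints applied, allows 100! distinct pinouts.
assignments = math.factorial(100)
print(f"{assignments:.3e}")  # ~9.333e+157
```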

Attempting to leverage a traditional system prototype/test/re-spin design process does not yield a convergent flow, given the enormous number of possible solutions. A “bad” FPGA pin assignment increases the number of PCB signal layers required to route the board, multiplies the number of PCB vias required, and yields trace lengths that preclude meeting system timing constraints. PCB vias are a source of mechanically induced electrical failures that degrade product reliability and erode customer satisfaction. The challenges of FPGA/PCB co-design are only exacerbated in multi-FPGA PCB designs.

Two Domains Seeking a Balance

So… how do we simplify the process, shorten the feedback loop, and improve the project schedule? The answer lies at both ends of the flow: both on-chip (FPGA) and off-chip (PCB) technologies need to address the interdependencies.

In an efficient flow, FPGA development proceeds in parallel with PCB development. Initial FPGA synthesis and P&R runs should be performed to iron out design issues that may be unrelated to pin assignments, but the sooner the FPGA team receives a reasonable estimate of pin locations, the better.

Contrary to popular belief, the PCB system design process does not require a completed FPGA implementation to begin. All that is required is an FPGA signal interface definition. Frequently, FPGA I/O standards are dictated by the components the FPGA connects to rather than by anything internal to the FPGA. With the interface definition in hand, the PCB design team can create a nearly optimal set of FPGA pin assignment constraints.
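As a purely hypothetical illustration (the signal names and I/O standards below are invented, not taken from any real design), such an interface definition can be little more than a list of signals, their directions, and the I/O standard each connected component dictates:

```python
# Hypothetical FPGA signal interface definition -- the board-facing
# information the PCB team needs to begin pin planning. All names and
# I/O standards are invented for illustration.
fpga_interface = [
    {"signal": "sys_clk_p",  "direction": "in",    "iostandard": "LVDS"},
    {"signal": "sys_clk_n",  "direction": "in",    "iostandard": "LVDS"},
    {"signal": "ddr_dq[0]",  "direction": "inout", "iostandard": "SSTL15"},
    {"signal": "ddr_dq[1]",  "direction": "inout", "iostandard": "SSTL15"},
    {"signal": "eth_txd[0]", "direction": "out",   "iostandard": "LVCMOS25"},
]
```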

Once these “near-perfect” pin constraints are handed off, the FPGA team can proceed with actual design closure. One major shortcoming of a typical FPGA flow is that RTL synthesis does not take physical placement, routing resources, or pin assignments into account, forcing P&R to churn on a sub-optimal gate-level netlist just to route logic signals to the correct device pins.
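The handoff itself need not be heavyweight; it can be as simple as a signal-to-pin mapping. The sketch below is illustrative only: the `set_pin_location` syntax and package pin names are generic stand-ins, not any vendor's actual constraint format.

```python
# Board-derived pin plan (package pin names invented for illustration).
pcb_pin_plan = {
    "sys_clk_p":  "AB11",
    "sys_clk_n":  "AC11",
    "eth_txd[0]": "F14",
}

# Emit constraints in a simplified, vendor-neutral "signal -> pin" form;
# a real flow would write the FPGA vendor's native constraint format.
for signal, pin in pcb_pin_plan.items():
    print(f"set_pin_location {signal} {pin}")
```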

FPGA Optimization and “Pin-Awareness”

On the other hand, when early physical FPGA synthesis is performed before in-chip P&R, results can be significantly improved. A physical synthesis flow takes the physical characteristics of the device and the pin assignments into account and hence has a better chance of achieving design closure for a heavily pin-constrained design. At an early stage, logic blocks are optimized not only in terms of their estimated routing resources and estimated placement on the device, but also in terms of the signals associated with device pins. Physical synthesis performs a series of physical optimizations such as retiming, register replication, and re-synthesis to improve the timing of the netlist, all while taking clock and I/O constraints into account, yielding a “pin-aware” netlist. This lightens the load on P&R, allowing shorter P&R run-times and a shorter path to FPGA design closure against the estimated pin assignments.

Once the FPGA I/O is optimized for both the PCB and the FPGA, the pin constraints are locked down. Subsequent changes in pin assignments will be minimal, allowing quick iterations on the FPGA side with minor adjustments. The result is a shorter FPGA-to-PCB feedback loop and, ultimately, a faster path to meeting cost and system performance requirements.

End-to-End Flow Intelligence

Solutions for the FPGA-to-PCB connection have matured in recent years, improving board integration cycle time. Unfortunately, design complexity, high pin counts, and time-to-market requirements continue to challenge even the latest methodologies. For further improvement, next-generation solutions require intelligence at both ends of the flow as well as the ability to exchange data between them. Automatic multi-FPGA I/O optimization communicating with pin-aware physical synthesis is a step toward such a unified FPGA-to-board methodology.


[1] Techfocus Media, Inc., November 2005
