
From Pinout to Layout: The FPGA/PCB Balancing Act

When a fifth of all designers say board integration is the biggest trouble spot in getting their product out the door[1], one has to conclude that the FPGA/PCB co-design dilemma remains unsolved for many. Either the FPGA designers continue to make pin assignments that simplify their own design closure goals but complicate those in the PCB domain, or the PCB team locks down pins early in the design cycle only to complicate design closure in the FPGA domain.

Though technology solutions have been available to facilitate pin assignment closure, bridging the FPGA and PCB gap requires collaborative techniques that do not demand that teams adopt new design behaviors. To keep the iterative process from impacting time-to-market goals and putting product revenue at risk, technology needs to be enhanced in multiple areas of the flow.

One such area is at the FPGA level, where the FPGA team is tasked with RTL development, IP integration, verification, and implementation via synthesis and in-chip place-and-route (P&R). It is the synthesis and P&R stage that is typically impacted when board integration becomes an issue. In an initial iteration, an FPGA designer usually assigns pins to clock signals or specific high-speed interface signals but otherwise prefers to leave the remaining pin assignments to P&R. Even with this flexibility, design closure goals such as timing and area may not be met in a single pass, forcing a long trial-and-error process in which different optimizations are explored. Today's largest FPGAs may take many hours for a single synthesis and P&R run, so several iterations can easily translate into a project delay.

To make matters worse, after all this effort, the PCB team may demand another round of iterations because the pinout assumptions made by the FPGA team have increased the number of PCB signal layers required to complete routing, increasing manufacturing costs while reducing product reliability.

Escalating Pin Counts Add Up to Cross-Domain Complexity

Modern product designs leveraging standard FPGA capabilities span megahertz-to-gigahertz operating frequencies while mandating more expensive High Density Interconnect (HDI) manufacturing processes. An FPGA design with as few as 100 signal pins has roughly 9 × 10^157 possible I/O assignments. Not all of these pin assignments are "legal," and while the flexibility of FPGA I/O may be leveraged to shorten PCB trace lengths and meet board timing constraints, most PCB designers do not possess the FPGA device expertise to differentiate between "legal" and "illegal" pin assignments. Yet today's FPGA devices can support over one thousand signal pins, resulting in a productivity quagmire when the FPGA is integrated onto the printed circuit board. "Time to market" is not the only business objective to suffer from this cross-domain complexity: product reliability and manufacturing costs are put at risk, as is the ability to meet system requirements.
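To see where a figure of that magnitude comes from, consider a back-of-the-envelope check (an illustration, not a vendor calculation): mapping 100 signals one-to-one onto 100 candidate pin sites already gives 100! possible orderings, which is on the order of 9 × 10^157.

import math

# Back-of-the-envelope check (illustrative only): the number of ways to map
# 100 signals one-to-one onto 100 candidate pin sites is 100 factorial.
signals = 100
assignments = math.factorial(signals)
print(f"{assignments:.2e}")  # ~9.33e+157, on the order of the 9 x 10^157 figure above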

Attempting to leverage a traditional system prototype/test/re-spin design process does not converge, given the enormous number of possible solutions. A "bad" FPGA pin assignment will increase the number of PCB signal layers required to route the board, multiply the number of PCB vias, and yield PCB trace lengths that preclude meeting system timing constraints. PCB vias are a source of mechanically induced electrical failures that degrade product reliability and erode customer satisfaction. These FPGA/PCB co-design challenges are only exacerbated in multi-FPGA PCB designs.
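As a rough illustration of why pin choice drives routed length (and, by extension, via count and layer count), consider a simple Manhattan-distance estimate between an FPGA ball and the board pad it must reach. All coordinates and names below are invented for the example; they stand in for the kind of geometric data a board tool would actually use.

# Purely illustrative sketch: estimate trace length for two candidate pin choices
# for the same signal. Coordinates are in mm on the board and are invented.
ddr_pad = (45.0, 20.0)               # pad on the memory device the signal must reach
candidate_ball_good = (42.5, 21.0)   # FPGA ball on the package edge facing the memory
candidate_ball_bad = (18.0, 55.0)    # FPGA ball on the far corner of the package

def manhattan_mm(a, b):
    """Manhattan (X + Y) distance, a crude lower bound on routed trace length."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

print(manhattan_mm(candidate_ball_good, ddr_pad))  # ~3.5 mm
print(manhattan_mm(candidate_ball_bad, ddr_pad))   # ~62 mm: longer trace, more vias and layers likely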

Two Domains Seeking a Balance

So… how do we simplify the process, shorten the feedback loop, and improve the project schedule? The answer lies at both ends of the flow: both on-chip (FPGA) and off-chip (PCB) technologies need to address the interdependencies.

In an efficient flow, FPGA development is done in parallel with PCB development. Initial FPGA synthesis and P&R runs should be performed to iron out design issues that may be unrelated to pin assignments. However, the sooner the FPGA team receives a reasonable estimate of pin locations the better.

Contrary to popular belief, the PCB system design process does not require a completed FPGA implementation to begin. All that is required is an FPGA signal interface definition. Frequently, FPGA I/O standards are dictated by the components the FPGA connects to rather than by anything internal to the FPGA. With the FPGA interface definition in hand, the PCB design team can create a nearly optimal set of FPGA pin assignment constraints.
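As a sketch of what such a handoff might involve (the signal names, bank numbers, and I/O-standard rules below are hypothetical, not tied to any particular device or tool), a board-driven pin proposal can be screened against basic bank and I/O-standard legality before it is passed to the FPGA team:

# Hypothetical example of checking a board-side pin proposal for basic legality.
# Signal names, pins, banks, and standards are invented for illustration.
proposed_pins = {
    "ddr_dq0": "A7",
    "ddr_dq1": "B7",
    "eth_txd0": "C3",
}

# Simplified device data: which bank each pin belongs to, and which I/O standards
# that bank can support.
pin_to_bank = {"A7": 34, "B7": 34, "C3": 15}
bank_standards = {34: {"SSTL15"}, 15: {"LVCMOS33", "LVCMOS25"}}

# I/O standard each signal needs, dictated by the external component it connects to.
required_standard = {"ddr_dq0": "SSTL15", "ddr_dq1": "SSTL15", "eth_txd0": "LVCMOS33"}

def illegal_assignments(pins):
    """Return signals whose proposed pin sits in a bank that cannot supply the
    I/O standard the external component requires."""
    bad = []
    for signal, pin in pins.items():
        bank = pin_to_bank[pin]
        if required_standard[signal] not in bank_standards[bank]:
            bad.append(signal)
    return bad

print(illegal_assignments(proposed_pins))  # an empty list means this basic check passes

A real flow would layer many more device rules (differential pairs, clock-capable pins, simultaneous-switching limits) on top of this, which is exactly the expertise gap between the PCB and FPGA teams that automated I/O optimization is meant to close.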

Once these “near-perfect” pin constraints are handed off, the FPGA team may proceed with actual design closure. One major shortcoming in a typical FPGA flow is that RTL synthesis does not take physical placement, routing resources, or pin assignments into account, forcing P&R to churn on a sub-optimal gate-level netlist to route logic signals to the correct device pins.

FPGA Optimization and “Pin-Awareness”

In contrast, when physical FPGA synthesis is performed before in-chip P&R, results can be significantly improved. A physical synthesis flow takes the physical characteristics of the device and the pin assignments into account and therefore has a better chance of achieving design closure for a heavily pin-constrained design. At this early stage, logic blocks are optimized not only in terms of their estimated routing resources and estimated placement on the device, but also in terms of the signals associated with device pins. Physical synthesis performs a series of physical optimizations, such as retiming, register replication, and re-synthesis, to improve netlist timing while honoring clock and I/O constraints, producing a "pin-aware" netlist. This lightens the load for P&R, allowing shorter P&R run times and faster FPGA design closure against the estimated pin assignments.

Once the FPGA I/O is optimized for the PCB and FPGA, the pin constraints are locked down. Subsequent changes in pin assignments will be minimal, allowing for a quick iteration on the FPGA side with minor adjustments. The result is a shorter FPGA-to-PCB feedback loop and ultimately a faster path to meeting cost and system performance requirements.

End-to-End Flow Intelligence

Solutions for the FPGA-to-PCB connection have matured in recent years, improving board integration cycle time. Unfortunately, design complexity, high pin counts, and time-to-market requirements continue to challenge even the latest methodologies. For further improvement, next-generation solutions require intelligence at both ends of the flow as well as the ability to exchange data between them. Automatic multi-FPGA I/O optimization communicating with pin-aware physical synthesis is a step toward such a unified FPGA-to-board methodology.


[1] Techfocus Media, Inc., November 2005
