
Employing an I/O Interlocutor

FMCs Decouple FPGAs from Complex I/Os

It used to be so simple. PLDs provided a medium by which you could create and modify logic without having to make any board changes. All the variability was on the inside; the outside consisted of I/Os, and, back in the day, that meant one thing: TTL. Eventually, as CMOS became more prevalent, the less-often-used rail-to-rail CMOS interface became available, but those I/Os were on separate devices dedicated to the low-power CMOS market.

Well, the first hints that the age of innocence was coming to an end appeared with the lowering of the power supply voltage from 5 V to 3.3 V. This was mostly managed through more careful I/O design so that, if possible, a 3.3-V I/O could tolerate 5-V signals when it was connecting to a device that was still on a 5-V supply. Yeah… remember when we scratched our heads wondering how one would manage two – count them, TWO! – supplies on a board? No longer could we ignore the I/O and simply focus changes on the internals of the PLD. The I/O now became part of the design work.

Obviously, I/Os have come a long way since then. In terms of low-level characteristics, I/Os on PLDs – and here, of course, we’re principally talking about FPGAs – have undergone a number of changes.

  • The lowering of the supply voltage wasn’t just one watershed event; it became a regular occurrence as oxide thicknesses on chips continued to get thinner, reducing the voltage they could withstand without breaking down. (Remember when tunnel oxide was the carefully-controlled thin stuff? Now it’s the carefully-controlled thick stuff.)
  • Various terminated single-ended standards evolved – SSTL, GTL, etc., bringing I/Os further into the analog realm.
  • Differential I/Os – primarily LVDS – were added, doubling the number of pins required for such signals.
  • Double-clocked capability was added, largely to support DDR memories (see the quick arithmetic sketch after this list).
  • Circuitry to embed clock information in signals was added to reduce the impact of skew in high-speed signals.
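
A quick back-of-the-envelope sketch of the two bullets flagged above, using made-up example numbers (a 200-MHz, 32-bit bus and a 16-signal interface) purely for illustration:

```python
# Throughput and pin-count arithmetic for double-clocked and differential I/O.
# All example numbers are invented for illustration.

def bus_throughput_gbps(clock_mhz, width_bits, edges_per_cycle=1):
    """Raw data rate of a parallel bus, in Gb/s."""
    return clock_mhz * 1e6 * width_bits * edges_per_cycle / 1e9

# Double-clocked (DDR) signaling moves data on both clock edges,
# doubling throughput at the same clock rate:
sdr = bus_throughput_gbps(clock_mhz=200, width_bits=32, edges_per_cycle=1)
ddr = bus_throughput_gbps(clock_mhz=200, width_bits=32, edges_per_cycle=2)
print(f"SDR: {sdr:.1f} Gb/s, DDR: {ddr:.1f} Gb/s")  # SDR: 6.4 Gb/s, DDR: 12.8 Gb/s

# Differential signaling (e.g., LVDS) costs two package pins per signal:
signals = 16
print(f"{signals} single-ended signals: {signals} pins; "
      f"{signals} differential signals: {2 * signals} pins")
```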

The actual number of physical variations here is modest, but they have been carefully chosen to support a large number of standards. The standards typically involve much more than the physical attributes of the signaling, requiring higher layers to package and encode data. A quick look at various FPGA vendors’ websites provides the following (probably incomplete) list of standards supported:

PCI
PCI-X
PCI Express
CompactPCI
RapidIO
Serial RapidIO
SerialLite
Aurora
Interlaken
XAUI
SGMII
XSBI
CEI-6G
HiGig
NPSI
CSIX-L1
GPON
SFI-5
SFI-4
SPI-4.2
SDH/SONET
POS-PHY L4
UTOPIA IV
CPRI
OBSAI
FibreChannel
HyperTransport
OC-48
SDI
HD-SDI
JESD204
SATA
CAN
MOST
SDR
DDR
ZBT
QDR
QDRII
RLDRAM
TTL
LVTTL (3.3, 2.5, 1.8)
LVCMOS (3.3, 2.5, 1.8)
GTL (3.3, 2.5)
GTL+ (3.3, 2.5)
SSTL 3 (I & II)
SSTL 2 (I & II)
HSTL (I & II)
1x AGP
2x AGP
CTT

So now we have this explosion of different I/O requirements. The next complication is the fact that, unlike their naïvely simple forebears, today’s FPGAs aren’t uniform internally. Choice of I/O is no longer orthogonal to placement of logic. So even though a given device may theoretically support a particular set of I/O standards, there’s no guarantee that you can arrange the logic blocks, assign the combination of I/O types they need, and still end up with a pinout that can be routed on a board. As a potential customer of a device, you may want to do some playing with one or two vendors’ chips before committing to a design or to a vendor, both for validating the quality of the signaling and for ensuring the likelihood of a fit.
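
To see why I/O choice and placement interact, recall that pins on an FPGA are grouped into banks, and every standard assigned to a bank typically has to agree on that bank’s supply voltage (VCCIO). The toy feasibility check below illustrates the flavor of the problem; the bank sizes, pin budgets, and voltage requirements are hypothetical, and real devices pile on further rules (VREF pins, clock-capable pins, differential-pair placement, and so on):

```python
# Toy feasibility check for the placement problem described above.
from itertools import product

# Required standards: (name, VCCIO in volts, pins needed) -- hypothetical
REQUIREMENTS = [("LVCMOS33", 3.3, 40), ("SSTL18", 1.8, 48), ("LVDS", 2.5, 24)]

# Hypothetical device: four banks of 50 user-I/O pins each
BANKS = [50, 50, 50, 50]

def feasible(requirements, banks):
    """Try every assignment of standards to banks; a bank may host several
    standards only if they share one VCCIO and fit within its pin budget."""
    for assign in product(range(len(banks)), repeat=len(requirements)):
        ok = True
        for b in range(len(banks)):
            group = [r for r, a in zip(requirements, assign) if a == b]
            if not group:
                continue
            if len({vccio for _, vccio, _ in group}) > 1:     # mixed VCCIO
                ok = False
                break
            if sum(pins for _, _, pins in group) > banks[b]:  # over budget
                ok = False
                break
        if ok:
            return assign  # bank index for each requirement, in order
    return None

print(feasible(REQUIREMENTS, BANKS))  # e.g. (0, 1, 2); None means no legal fit
```

Even this toy version is a brute-force search; fold in the board-routing side of the problem, and it’s easy to see why a datasheet “yes” is no guarantee of a practical fit.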

Do they really work?

Remember when you could buy prototyping “boards” into which you could simply plug chips and wires in order to try stuff out? Am I sounding really old here? Well, those days are long gone; if you really want to test this stuff out yourself, you have to design an umpteen-layer board to do so. If you cut corners, compromising the quality of signaling, then you might incorrectly decide that the I/O drivers were somehow inadequate. The level of effort required makes building your own evaluation boards pretty impractical. So what’s the answer? Easy. Require the vendor to provide evaluation boards so that you – and anyone else wanting to do this – can check out the chip with minimal effort and at much lower cost.

Now place yourself in the shoes of an FPGA maker. You’ve got this nifty new set of I/O standards for which your customer base has been clamoring. But your customers are skeptical… trust but verify… these are not simple standards, even at the electrical level. Some engineers are gonna want to peek at the eye openings to see how wide and clean they are. Others are going to want to run some environmental tests. Still others will want to implement full standards to ensure that they can in fact operate at the performance level required.

So an FPGA vendor will want to prove out its I/O skills by building an eval board. Simple, right? If your answer was “Yes,” you need to look back at that list up there. Imagine one board intended to demonstrate all of those standards. The internal IP required isn’t really the issue; prospective customers will simply implement the logic for whichever standard they’re interested in. But I/Os have to be available for all of them, and, in order to test things out, you probably need something more on the board. For instance, when testing out memory interfaces, it’s useful to have actual memories being driven.

I’m pretty sure no board has been designed to handle everything above, but there have been boards designed to handle many of them. Big boards. Or, perhaps more to the point, expensive boards. This can get into five digits easily. But who wants to pay five digits to evaluate a dumb ol’ chip??

Setting evaluation aside for a second, there is a separate trend for companies to use commercial off-the-shelf (COTS) boards where possible, minimizing the design effort and hopefully sharing the cost of the design of the board with all the other customers using that same board. This is particularly prevalent with computing platforms, since the bulk of the actual system development work can then be software rather than hardware design. But such boards also exist where FPGAs feature as the key locus of innovation for users of these boards. You buy a board with one or more FPGAs plus other ancillary goodies, and your design task is “limited” to configuring the FPGA with your value-added widgetry.

Now picture yourself as a COTS vendor. You would like a board to be useful as broadly as possible. But there’s the issue of those pesky I/Os – which ones should be supported? This has enormous implications for product line management, since what should be a single high-volume board could fracture into a dozen flavors of varying popularity.

A glimmer of help

There is help on the horizon for these problems, although the motivation for this white knight is actually being positioned somewhat differently from that which has just been described – we’ll get to that in a minute. The rescue is coming in the form of the so-called FMC (no, for you old-timers, no food machinery here – FPGA Mezzanine Card). The I/Os are being decoupled from the rest of the board, adding to the dazzling array of small form factor and mezzanine cards available.

The official name of the standard is VITA 57, and, in fact, it’s not yet a done deal. But it is close enough that VMETRO is releasing the first commercially available FMC, focused on the analog/digital interface and largely targeting such defense-oriented applications as signal intelligence, electronic countermeasures, and radar. VMETRO seems to have most of the information on the web about this standard; everyone else is pretty much mum. Xilinx also apparently had a hand in developing the standard. (It would be interesting to see how many standards have Xilinx fingerprints on them… I suspect there are quite a few.)

FMCs are intended to be usable on a wide variety of form factors; listed examples include VME (of course), CompactPCI, CompactPCI Express, VXS, VPX, VPX-REDI, ATCA, and AMC. They are available in single and double widths, allowing more or less room for goodies on the FMC itself. Two connector sizes are supported – 160 pins and 400 pins – and a 160-pin connector can mate with a 400-pin connector, reducing the risk of mismatch. Both single-ended and differential signaling are supported, with clock rates up to 1 GHz. The actual bandwidth (and, in fact, clock rate) will vary not only with the number of pins used but also with the actual chips on the FMC (or the endpoints that the FMC drives).
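
For a feel for what those connector numbers could mean in aggregate, here’s some rough arithmetic. The pair counts and signaling choices below are illustrative assumptions, not figures from the spec; as just noted, real rates depend on the chips at either end:

```python
# Rough aggregate-bandwidth arithmetic for the connector figures above.
# The pair counts and signaling choices are invented for illustration.

def aggregate_gbps(diff_pairs, clock_ghz, ddr=False):
    """Raw bandwidth across a set of differential pairs, in Gb/s."""
    per_pair_gbps = clock_ghz * (2 if ddr else 1)  # bits per clock per pair
    return diff_pairs * per_pair_gbps

# Hypothetical budget: 34 pairs on a 160-pin connector, clocked at the 1 GHz
# the standard supports, single data rate:
print(aggregate_gbps(34, 1.0), "Gb/s")            # 34.0 Gb/s

# 80 pairs on a 400-pin connector, double-clocked at the same rate:
print(aggregate_gbps(80, 1.0, ddr=True), "Gb/s")  # 160.0 Gb/s
```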

The rationale for such a standard is largely being positioned as a re-use benefit: a design intended for one set of I/Os can largely be retargeted to a different set without touching the board; minimal changes in the FPGA and a swap of the FMC are all that’s required. This is, of course, true, but it tweaks that thing in my gut that questions how much re-use really happens. If re-use were the true raison d’être for the FMC, then there would seem to be a reasonable risk of the FMC being yet another one of those standards that sounds good but ends up languishing alongside numerous other well-intentioned brethren that never got traction.

But looked at differently, and hinted at slightly in the literature, the real news here is the benefit that this provides to makers of COTS boards like VMETRO. If this can simplify the product-mix issue for them, and if that improves the economics and practicality of using COTS boards, then rather than this being a re-use benefit, it becomes one essentially of IP. As a system designer, you no longer have to design your own board – you can buy “board IP” and focus your attention on your own unique algorithms. You then buy (or in the worst case, design) an FMC to connect you to the rest of the world.

In the same manner, it becomes much easier for FPGA vendors to provide evaluation materials for new chips. The evaluation board itself can focus on the new FPGA, and then various FMCs can be made available (either by the vendor or by commercial FMC suppliers) to handle the different I/O configurations that might be required. Each prospective customer can focus on the I/O standards of interest and not be burdened with all of the other possibilities.
