Employing an I/O Interlocutor

FMCs Decouple FPGAs from Complex I/Os

It used to be so simple. PLDs provided a medium by which you could create and modify logic without having to make any board changes. All the variability was on the inside; the outside consisted of I/Os, and, back in the day, that meant one thing: TTL. Eventually, as CMOS became more prevalent, the less-often-used rail-to-rail CMOS interface became available, but those I/Os lived on separate devices dedicated to the low-power CMOS market.

Well, the first hints that the age of innocence was coming to an end appeared with the lowering of the power supply voltage from 5 V to 3.3 V. This was mostly managed through more careful I/O design so that, where possible, a 3.3-V I/O could tolerate 5-V signals when connected to a device that was still on a 5-V supply. Yeah… remember when we scratched our heads wondering how one would manage two – count them, TWO! – supplies on a board? No longer could we ignore the I/O and simply focus changes on the internals of the PLD. The I/O now became part of the design work.

Obviously, I/Os have come a long way since then. In terms of low-level characteristics, I/Os on PLDs – and here, of course, we’re principally talking about FPGAs – have undergone a number of changes:

  • The lowering of supply voltage wasn’t just one watershed event; it became a regular occurrence as oxide thicknesses on chips continued to get thinner, reducing the voltage they could withstand without breaking down. (Remember when tunnel oxide was the carefully-controlled thin stuff? Now it’s the carefully-controlled thick stuff.)
  • Various terminated single-ended standards evolved – SSTL, GTL, and the like – bringing I/Os further into the analog realm.
  • Differential I/Os – primarily LVDS – were added, doubling the number of pins each such signal requires.
  • Double-clocked capability was added, largely to support DDR memories.
  • Circuitry to embed clock information in signals was added to reduce the impact of skew in high-speed signals.

The actual number of physical variations here is modest, but they have been carefully chosen to support a large number of standards. The standards typically involve much more than the physical attributes of the signaling, requiring higher layers to package and encode data. A quick look at various FPGA vendors’ websites yields the following (probably incomplete) list of supported standards:

PCI
PCI-X
PCI Express
CompactPCI
RapidIO
Serial RapidIO
SerialLite
Aurora
Interlaken
XAUI
SGMII
XSBI
CEI-6G
HiGig
NPSI
CSIX-L1
GPON
SFI-5
SFI-4
SPI-4.2
SDH/SONET
POS-PHY L4
UTOPIA IV
CPRI
OBSAI
FibreChannel
HyperTransport
OC-48
SDI
HD-SDI
JESD204
SATA
CAN
MOST
SDR
DDR
ZBT
QDR
QDRII
RLDRAM
TTL
LVTTL (3.3, 2.5, 1.8)
LVCMOS (3.3, 2.5, 1.8)
GTL (3.3, 2.5)
GTL+ (3.3, 2.5)
SSTL 3 (I & II)
SSTL 2 (I & II)
HSTL (I & II)
1x AGP
2x AGP
CTT

So now we have this explosion of different I/O requirements. The next complication is the fact that, unlike their naïve, simplistic forebears, today’s FPGAs aren’t uniform internally. Choice of I/O is no longer orthogonal to placement of logic. So even though a particular set of I/O standards may be theoretically possible in a given device, there’s no guarantee that your particular arrangement of logic blocks, with its mix of I/O types, can be laid out in a way that the board can actually route. As a potential customer of a device, you may want to do some playing with one or two vendors’ chips before committing to a design or to a vendor, both for validating the quality of the signaling and for ensuring the likelihood of a fit.
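
To make that concrete, here is a minimal sketch of the kind of feasibility check involved. It assumes a simplified, hypothetical model in which I/Os are grouped into banks that each share a single VCCIO supply and each voltage group must land in its own bank; real vendor rules are more involved, and the voltages below are only representative.

from itertools import permutations

# Hypothetical supply voltages for a few single-ended standards (illustrative only)
VCCIO = {"LVCMOS33": 3.3, "LVCMOS25": 2.5, "SSTL2": 2.5, "HSTL": 1.5}

def fits(requests, bank_sizes):
    """requests: list of (standard, pin_count); bank_sizes: pins per bank.
    Returns True if each supply-voltage group can be given its own bank."""
    pins_by_rail = {}
    for std, pins in requests:
        rail = VCCIO[std]
        pins_by_rail[rail] = pins_by_rail.get(rail, 0) + pins
    groups = list(pins_by_rail.values())
    # Brute force: try every assignment of voltage groups to distinct banks
    for assignment in permutations(range(len(bank_sizes)), len(groups)):
        if all(groups[i] <= bank_sizes[b] for i, b in enumerate(assignment)):
            return True
    return False

# 40 LVCMOS33 pins and 60 SSTL2 pins fit two 64-pin banks...
print(fits([("LVCMOS33", 40), ("SSTL2", 60)], [64, 64]))   # True
# ...but 70 SSTL2 pins will not fit in any single 64-pin bank
print(fits([("LVCMOS33", 40), ("SSTL2", 70)], [64, 64]))   # False

Real pin planners, of course, account for much more (reference voltages, clock-capable pins, simultaneous-switching limits), which is exactly why a little hands-on playing is worth the time.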

Do they really work?

Remember when you could buy prototyping “boards” into which you could simply plug chips and wires in order to try stuff out? Am I sounding really old here? Well, those days are long gone; if you really want to test this stuff out yourself, you have to design an umpteen-layer board to do so. If you cut corners, compromising the quality of signaling, then you might incorrectly decide that the I/O drivers were somehow inadequate. The level of effort required makes building your own evaluation boards pretty impractical. So what’s the answer? Easy. Require the vendor to provide evaluation boards so that you – and anyone else wanting to do this – can check out the silicon with minimal effort and at much lower cost.

Now place yourself in the shoes of an FPGA maker. You’ve got this nifty new set of I/O standards for which your customer base has been clamoring. But your customers are skeptical… trust but verify… these are not simple standards, even at the electrical level. Some engineers are gonna want to peek at the eye openings to see how wide and clean they are. Others are going to want to run some environmental tests. Still others will want to implement full standards to ensure that they can in fact operate at the performance level required.

So an FPGA vendor will want to provide a way to prove out its I/O skills by building an eval board. Simple, right? If your answer was “Yes,” you need to look back at that list up there. Imagine one board intended to demonstrate all of those standards. The internal IP required isn’t really the issue; prospective customers will simply implement the logic for whichever standard they’re interested in. But I/Os have to be available for all of them, and in order to test things out, you probably need something more on the board. For instance, when testing out memory interfaces, it’s useful to have actual memories being driven.

I’m pretty sure no board has been designed to handle everything above, but there have been boards designed to handle many of them. Big boards. Or, perhaps more to the point, expensive boards. This can get into five digits easily. But who wants to pay five digits to evaluate a dumb ol’ chip??

Setting evaluation aside for a second, there is a separate trend for companies to use commercial off-the-shelf (COTS) boards where possible, minimizing the design effort and, with luck, sharing the board’s design cost with all the other customers using that same board. This is particularly prevalent with computing platforms, since the bulk of the actual system development work can then be software rather than hardware design. But such boards also exist where FPGAs feature as the key locus of innovation for users of these boards. You buy a board with one or more FPGAs plus other ancillary goodies, and your design task is “limited” to configuring the FPGA with your value-added widgetry.

Now picture yourself as a COTS vendor. You would like a board to be as broadly useful as possible. But there’s the issue of those pesky I/Os – which ones should be supported? This has enormous implications for product line management, since what should be a single high-volume board could fracture into a dozen flavors of varying popularity.

A glimmer of help

There is help on the horizon for these problems, although the motivation for this white knight is actually being positioned somewhat differently from what has just been described – we’ll get to that in a minute. The rescue is coming in the form of the so-called FMC (no, for you old-timers, no food machinery here – FPGA Mezzanine Card). The I/Os are being decoupled from the rest of the board, adding to the dazzling array of small-form-factor and mezzanine cards available.

The official name of the standard is VITA 57, and, in fact, it’s not yet a done deal. But it is close enough that VMETRO is releasing the first commercially available FMC, focused on the analog/digital interface and largely targeting such defense-oriented applications as signal intelligence, electronic countermeasures, and radar. VMETRO seems to have most of the information on the web about this standard; everyone else is pretty much mum. Xilinx also apparently had a hand in developing the standard (it would be interesting to see how many standards have Xilinx fingerprints on them… I suspect there are quite a few).

FMCs are intended to be usable on a wide variety of form factors; listed examples include VME (of course), CompactPCI, CompactPCI Express, VXS, VPX, VPX-REDI, ATCA, and AMC. They are available in single and double widths, allowing more or less room for goodies on the FMC itself. Two connector sizes are supported, 160 pins and 400 pins, and a 160-pin connector can mate with a 400-pin connector, reducing the risk of mismatch. Both single-ended and differential signaling are supported, as are clock rates of 1 GHz. The actual bandwidth (and, in fact, clock rate) will vary not only with the number of pins used, but with the actual chips on the FMC (or the endpoints that the FMC drives).
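
As a rough illustration of how those variables interact, the sketch below uses purely hypothetical numbers (the real VITA 57 pinout reserves pins for power, ground, clocks, and control, and the achievable rate ultimately depends on the devices at either end) to estimate aggregate bandwidth from pin count, signaling style, and clock rate.

def aggregate_gbps(signal_pins, differential, clock_mhz, double_data_rate):
    # Crude upper bound: lanes x clock x bits-per-clock, ignoring all protocol overhead
    lanes = signal_pins // 2 if differential else signal_pins  # a differential pair uses 2 pins
    bits_per_clock = 2 if double_data_rate else 1
    return lanes * clock_mhz * bits_per_clock / 1000.0  # Gb/s

# e.g., 80 pins used as 40 differential pairs, clocked at 500 MHz, double data rate:
print(aggregate_gbps(80, differential=True, clock_mhz=500, double_data_rate=True))  # 40.0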

The rationale for such a standard is largely being positioned as a re-use benefit: a design intended for one I/O can be largely retargeted to a different set of I/Os without requiring a board change; minimal changes in the FPGA and a swap-out of the FMC are all that’s required. This is, of course, true, but it tweaks that thing in my gut that questions how much re-use really happens. If re-use were the true raison d’être for the FMC, then there would seem to be a reasonable risk of the FMC being yet another one of those standards that sounds good but ends up languishing alongside numerous other well-intentioned brethren that never got traction.

But looked at differently, and hinted at slightly in the literature, the real news here is the benefit that this provides to makers of COTS boards like VMETRO. If this can simplify the product-mix issue for them, and if that improves the economics and practicality of using COTS boards, then rather than this being a re-use benefit, it becomes one essentially of IP. As a system designer, you no longer have to design your own board – you can buy “board IP” and focus your attention on your own unique algorithms. You then buy (or in the worst case, design) an FMC to connect you to the rest of the world.

In the same manner, it becomes much easier for FPGA vendors to provide evaluation materials for new chips. The evaluation board itself can focus on the new FPGA, and then various FMCs can be made available (either by the vendor or by commercial FMC suppliers) to handle the different I/O configurations that might be required. Each prospective customer can focus on the I/O standards of interest and not be burdened with all of the other possibilities.
