
Taming a Tenth of a Terabit

FPGAs in 100GbE Tester

When the bandwidth glut funnels its way from the bundle of “last miles” into the big aggregators, switching our packets with ever-increasing density, we inevitably reach the point of the Big Pipe. The Big Pipe is always the limit of our technology: the most bits we can cram into a single cable so we can run them across the floor to another machine.

Every couple of years, the size of the Big Pipe increases – through some clever convolution of Moore’s Law.  Today, we sit on the threshold of 100 billion bits per second.  In Ethernet.  Now, we could reminisce about the days when some of us thought Ethernet had run its course – that we were about to approach the limit of what was possible with the venerable bundle of wires and fibers that 802.3 has brought us since the mid 1970s.

In one of the world’s most ironic chicken-and-egg games, you have to be able to test a standard before you can build it, and testing to a standard is inevitably a much more daunting design challenge than just building the thing in the first place.  Getting the performance and flexibility required to create test equipment for emerging standards like 100GbE is the classic design target for the very highest-end FPGAs.  When FPGA companies want to squeeze the last possible modicum of performance, density, and power efficiency out of the most cutting-edge process node, they are inevitably envisioning test equipment for the next Big Pipe.

When Altera first announced their 40nm Stratix IV FPGAs, long before the devices hit volume production, their main bragging points were — you guessed it — how well the new devices would hold up in 100GbE and 40GbE applications.  Every aspect of the new devices was defined in terms of these applications: how many transceivers would it take for various tasks, what power consumption was expected at the higher densities and frequencies involved, how much fabric and memory would these applications consume?  If the family could forge a new notch on the bench for these rare-air applications, everything else was demonstrably easy.

JDSU decided to take up Altera’s challenge.  JDSU’s just-announced 100-Gigabit Ethernet Test Suite puts some of the industry’s biggest, baddest FPGAs in the hot socket, seeing how they measure up against the Next Big Pipe.  The answer, apparently, is that the FPGAs did quite well.  JDSU designed in the Altera Stratix IV GT – a 40nm FPGA with built-in 11.3 Gbps SerDes transceivers.  The devices boast over 500K 4-input LUT equivalents (delivered as 200K Adaptive Logic Modules – essentially an 8-input fracturable LUT), 20 Mb of embedded memory, over 1K 18×18 hard-wired multipliers, and the centerpiece (which is ironically at the edge) – a collection of up to 48 multi-gigabit SerDes transceivers, 24 of which are capable of 11.3 Gbps.
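As a back-of-the-envelope check (our arithmetic, not JDSU’s): a 100GbE CAUI-10 electrical interface runs 10 lanes at 10.3125 Gbps per IEEE 802.3ba, so those 24 fast transceivers cover the whole interface with headroom to spare.  A minimal sketch, assuming the 802.3ba lane rate and the device figures quoted above:

```python
# Back-of-the-envelope transceiver budget for a 100GbE CAUI-10 interface
# on a Stratix IV GT.  Lane rate per IEEE 802.3ba; device figures as above.

CAUI10_LANES = 10            # 100GbE CAUI-10: 10 electrical lanes
CAUI10_LANE_GBPS = 10.3125   # per-lane line rate (64b/66b encoded)

GT_FAST_XCVRS = 24           # Stratix IV GT transceivers rated for 11.3 Gbps
GT_FAST_GBPS = 11.3

aggregate = CAUI10_LANES * CAUI10_LANE_GBPS   # total serial bandwidth needed
headroom = GT_FAST_GBPS - CAUI10_LANE_GBPS    # margin per lane
spare = GT_FAST_XCVRS - CAUI10_LANES          # fast lanes left over

print(f"aggregate line rate: {aggregate} Gbps")       # 103.125 Gbps
print(f"per-lane headroom:   {headroom:.4f} Gbps")
print(f"spare fast lanes:    {spare}")
```

The margin matters: test equipment often needs to over-stress a link slightly, so running a 10.3125 Gbps lane on an 11.3 Gbps-rated transceiver leaves room for deliberate rate offsets.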

JDSU didn’t know about Stratix IV when they started this project 3 years ago.  “We started about three years ago investigating next-generation network requirements – 40Gb and 100Gb,” says Johannes Becker, Marketing Director at JDSU.  “We collected feedback and requirements from customers and component suppliers.  Nobody knew which direction 100Gb would be going.  10X10Gb? 4X25Gb?”  Now, 100Gb is implemented with 4 lanes of 25Gb optics and 40Gb with 4 lanes of 10Gb optics.  
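The lane arithmetic behind those competing options is straightforward: 802.3ba puts 64b/66b encoding on the wire, so each lane’s serial line rate is its share of the payload times 66/64.  A quick sketch of the splits Becker mentions (encoding overhead per 802.3ba; this is illustrative arithmetic, not a JDSU formula):

```python
# Per-lane serial line rates for the lane splits discussed above.
# IEEE 802.3ba uses 64b/66b encoding, so line rate = payload * 66/64.

ENC = 66 / 64  # 64b/66b overhead factor (1.03125)

def lane_rate_gbps(total_payload_gbps, lanes):
    """Serial line rate of one lane, including 64b/66b overhead."""
    return total_payload_gbps / lanes * ENC

print(lane_rate_gbps(100, 10))  # 10x10Gb option -> 10.3125 Gbps per lane
print(lane_rate_gbps(100, 4))   # 4x25Gb option  -> 25.78125 Gbps per lane
print(lane_rate_gbps(40, 4))    # 40GbE 4x10Gb   -> 10.3125 Gbps per lane
```

Note that the 10×10 Gb split lands at 10.3125 Gbps per lane, comfortably inside the 11.3 Gbps rating of the FPGA’s fast transceivers, while 25 Gb lanes were beyond what any FPGA SerDes of that generation could drive directly.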

At the time, however, JDSU needed flexibility to begin designing equipment, even when the standard was up in the air.  They designed a daughterboard for transponders and worked closely with component suppliers like Altera to be sure the technology would be in place to support their design structure.  “We wanted to future-proof the solution,” explains Dietmar Tandler, R&D Director at JDSU.  “We learned a lot from our 40G project.  We designed the system so we could exchange only the transponders without altering the rest of the hardware.”

The “partnering” part of that is of significant value to FPGA companies.  The days when you could just add a bunch more LUTs, add 20% to the pin count, bump the SerDes up a notch, and announce a new family are long gone.  These days, FPGA companies have to engineer general-purpose parts for specific purposes.  They look at what they believe are key applications that need to be served by their next-generation family and then work with “friendly” design teams to be sure their newly-designed chips will have the features, capacity, and performance to handle those killer apps.  When the union of all those Venn diagrams is taken, we hopefully get an FPGA that is capable of much more than the sum of its target applications: a general-purpose part that really sings on the main objectives and can be flexed and stretched to accommodate a much wider gamut of problems.

Given the enormous performance, power, flexibility, and functionality challenges, what made the design team lose sleep at night?  “Density was the big deal,” continues Tandler.  “We needed to avoid high-speed multiplexers.  We used large amounts of internal RAM on the FPGA, all of the capability of the high-speed serial, and we needed enough logic left over to complete the functionality.”  

For many of us, signal integrity would be the big fear in such a project – getting that many multi-gigabit transceivers to behave over a wide range of conditions at those frequencies sounds like it could end up being one of those “squeeze the balloon” problems where fixing one area just makes a new problem appear somewhere else.  “That part went very smoothly,” Tandler replies.  “We probably only spent about 2 weeks on SI tuning.”

Power, although a huge challenge for the FPGA company, didn’t really show up on radar for JDSU.  This is a major feather in the cap of the FPGA, because putting that much hardware, operating at these frequencies, all on one chip at 40nm and having power not be the prohibitive concern speaks to a good bit of engineering in the devices and in the design tools that support them.  With each new FPGA process node, we predict that power will be a major concern, and each time the FPGA companies manage to gain significant ground on the problem – which keeps overall power consumption at par or lower despite increasing densities, frequencies, and propensities for transistor leakage.  Moore’s Law lives on to fight another day.

The resulting just-announced products from JDSU are designed to evaluate 100GbE systems.  The ONT 100G Module tests optical and electrical interfaces from the physical and PCS layers up through Ethernet/IP and protocols, as well as transponders.  The JDSU Hydra measures stress sensitivity in 100GbE systems, and the MAP-200 handles multiplexing/de-multiplexing, signal conditioning, and signal access for 100GbE optical signals.

If you’re designing 40G or 100G Ethernet, it’s nice to know that you’ll be able to test and measure your work with proven test equipment.  Beyond that, it is really nice to know that FPGAs are already proven in production to deliver the performance, features, power consumption, and density that you’ll need for your application.  It takes a bit of the fear out of “bleeding-edge” design work.
