
Taming a Tenth of a Terabit

FPGAs in 100GbE Tester

As the bandwidth glut funnels its way from the bundle of "last miles" into the big aggregators, which switch our packets with ever-increasing density, we inevitably reach the point of the Big Pipe. The Big Pipe is always the limit of our technology: the most bits we can cram into a single cable so we can run them across the floor to another machine.

Every couple of years, the size of the Big Pipe increases through some clever convolution of Moore's Law. Today, we sit on the threshold of 100 billion bits per second. In Ethernet. Now, we could reminisce about the days when some of us thought Ethernet had run its course, that we were about to approach the limit of what was possible with the venerable bundle of wires and fibers that Ethernet has brought us since the mid-1970s.

In one of the world's most ironic chicken-and-egg games, you have to be able to test a standard before you can build it, and testing to a standard is inevitably a much more daunting design challenge than just building the thing in the first place. Getting the performance and flexibility required to create test equipment for emerging standards like 100GbE is the classic design target for the very highest-end FPGAs. When FPGA companies want to squeeze the last possible modicum of performance, density, and power efficiency out of the most cutting-edge process node, they are inevitably envisioning test equipment for the next Big Pipe.

When Altera first announced their 40nm Stratix IV FPGAs, long before the devices hit volume production, their main bragging points were (you guessed it) how well the new devices would hold up in 100GbE and 40GbE applications. Every aspect of the new family was defined in terms of these applications: how many transceivers would various tasks require, what power consumption was expected at the higher densities and frequencies involved, and how much fabric and memory would these applications consume? If the family could forge a new notch on the bench for these rare-air applications, everything else would be demonstrably easy.

JDSU decided to take up Altera's challenge. JDSU's just-announced 100-Gigabit Ethernet Test Suite puts some of the industry's biggest, baddest FPGAs in the hot socket, seeing how they measure up against the Next Big Pipe. The answer, apparently, is that the FPGAs did quite well. JDSU designed in the Altera Stratix IV GT, a 40nm FPGA with built-in 11.3 Gbps SerDes transceivers. The devices boast over 500K 4-input LUT equivalents (delivered as 200K Adaptive Logic Modules, each essentially a fracturable 8-input LUT), 20 Mb of embedded memory, over 1K 18×18 hard-wired multipliers, and the centerpiece (which is, ironically, at the edge): a collection of up to 48 multi-gigabit SerDes transceivers, 24 of which are capable of 11.3 Gbps.
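As a rough sanity check on why those transceiver specs matter, the sketch below compares the per-lane rates of the standard 802.3 multi-lane electrical interfaces (XLAUI for 40GbE, CAUI-10 for 100GbE) against the Stratix IV GT's 11.3 Gbps channel ceiling. The lane rates are standard IEEE 802.3ba figures; the 24-channel / 11.3 Gbps numbers come from the device description above. This is an illustration of the arithmetic, not JDSU's design data.

```python
# Feasibility check: do the Stratix IV GT's 11.3 Gbps transceivers cover
# the per-lane rates of the emerging multi-lane Ethernet interfaces?
GT_LANES = 24          # 11.3 Gbps-capable channels per device (from the article)
GT_MAX_RATE = 11.3e9   # per-channel ceiling, bits/s

# Standard 802.3ba electrical lane configurations: (lane count, lane rate)
interfaces = {
    "XLAUI (40GbE)":    (4, 10.3125e9),
    "CAUI-10 (100GbE)": (10, 10.3125e9),
}

for name, (lanes, rate) in interfaces.items():
    fits = rate <= GT_MAX_RATE and lanes <= GT_LANES
    print(f"{name}: {lanes} x {rate / 1e9:.4f} Gbps -> {'fits' if fits else 'too fast'}")
```

Both interfaces fit comfortably within a single device's transceiver budget, which is consistent with the article's point that the family was sized with exactly these applications in mind.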

JDSU didn’t know about Stratix IV when they started this project 3 years ago.  “We started about three years ago investigating next-generation network requirements – 40Gb and 100Gb,” says Johannes Becker, Marketing Director at JDSU.  “We collected feedback and requirements from customers and component suppliers.  Nobody knew which direction 100Gb would be going.  10X10Gb? 4X25Gb?”  Now, 100Gb is implemented with 4 lanes of 25Gb optics and 40Gb with 4 lanes of 10Gb optics.  
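The "10X10Gb or 4X25Gb?" question Becker describes is, at bottom, lane arithmetic once you include 64b/66b line-coding overhead. A quick sketch using standard 802.3 figures (not JDSU's internal numbers):

```python
# 100GbE lane-rate arithmetic with 64b/66b coding overhead.
MAC_RATE = 100e9     # 100GbE payload rate, bits/s
OVERHEAD = 66 / 64   # 64b/66b coding expands every 64 payload bits to 66

line_rate = MAC_RATE * OVERHEAD  # 103.125 Gbps total serial rate

# The two lane splits the standards community debated:
for lanes in (10, 4):
    per_lane = line_rate / lanes
    print(f"{lanes} lanes -> {per_lane / 1e9:.5f} Gbps per lane")
# 10 lanes at 10.3125 Gbps reuses 10GbE-class SerDes (CAUI-10);
# 4 lanes at 25.78125 Gbps is the 4x25G optics approach that ultimately won.
```

The 10-lane split let first-generation gear reuse proven 10 Gbps-class transceivers, which is why flexibility across both options mattered so much while the standard was unsettled.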

At the time, however, JDSU needed flexibility to begin designing equipment, even when the standard was up in the air.  They designed a daughterboard for transponders and worked closely with component suppliers like Altera to be sure the technology would be in place to support their design structure.  “We wanted to future-proof the solution,” explains Dietmar Tandler, R&D Director at JDSU.  “We learned a lot from our 40G project.  We designed the system so we could exchange only the transponders without altering the rest of the hardware.”

The "partnering" part of that is of significant value to FPGA companies. The days when you could just add a bunch more LUTs, add 20% to the pin count, bump the SerDes up a notch, and announce a new family are long gone. These days, FPGA companies have to engineer general-purpose parts for specific purposes. They look at what they believe are the key applications their next-generation family needs to serve and then work with "friendly" design teams to be sure the newly designed chips will have the features, capacity, and performance to handle those killer apps. When the union of all those overlapping requirements is taken, we hopefully get an FPGA that is capable of much more than the sum of its target applications: a general-purpose part that really sings on the main objectives and can be flexed and stretched to accommodate a much wider gamut of problems.

Given the enormous performance, power, flexibility, and functionality challenges, what made the design team lose sleep at night?  “Density was the big deal,” continues Tandler.  “We needed to avoid high-speed multiplexers.  We used large amounts of internal RAM on the FPGA, all of the capability of the high-speed serial, and we needed enough logic left over to complete the functionality.”  

For many of us, signal integrity would be the big fear in such a project – getting that many multi-gigabit transceivers to behave over a wide range of conditions at those frequencies sounds like it could end up being one of those “squeeze the balloon” problems where fixing one area just makes a new problem appear somewhere else.  “That part went very smoothly,” Tandler replies.  “We probably only spent about 2 weeks on SI tuning.”

Power, although a huge challenge for the FPGA company, didn't really show up on the radar for JDSU. This is a major feather in the cap of the FPGA, because putting that much hardware, operating at these frequencies, all on one chip at 40nm, and having power not be the prohibitive concern speaks to a good bit of engineering in the devices and in the design tools that support them. With each new FPGA process node, we predict that power will be a major concern, and each time the FPGA companies manage to gain significant ground on the problem, keeping overall power consumption at par or lower despite increasing densities, frequencies, and propensities for transistor leakage. Moore's Law lives on to fight another day.

The resulting, just-announced products from JDSU are designed to evaluate 100GbE systems. The ONT 100G Module tests optical and electrical interfaces from the physical layer up through the PCS layer and Ethernet/IP protocols, as well as transponders. The JDSU Hydra measures stress sensitivity in 100GbE systems, and the MAP-200 handles multiplexing/de-multiplexing, signal conditioning, and signal access for 100GbE optical signals.

If you're designing 40G or 100G Ethernet, it's nice to know that you'll be able to test and measure your work with proven test equipment. Beyond that, it is really nice to know that FPGAs are already proven in production to deliver the performance, features, power consumption, and density that you'll need for your application. It takes a bit of the fear out of "bleeding-edge" design work.

