
Pixel Panorama

FPGAs Enable Video Revolution

When we got 1080p60, some people thought we were done.  Those big, beautiful high-definition images were leaps and bounds better than the fuzzy CRT pictures we grew up with – even for those of us with big-budget old-school AV technology.  My 32-inch Sony CRT weighed about ten-thousand pounds and cost about ten-thousand pounds – OK, not really. And for all that, I got a picture that was about the quality of the “low-res” standard-definition YouTube images that draw the ire of today’s kids – but only if I used my LaserDisc player to get “videophile quality.” 

Now that we have 1080p, it’s all good, right?  I mean, we all have parents who still watch standard-definition cable by accident and don’t notice the difference, don’t we? 

We’re not done, actually.  Not by a long shot.  1080p60 is already old, with people demanding 120Hz and 240Hz refresh rates.  Based on this week’s CES, video game consoles and content that don’t support 3D are not long for this world, and the general public will not put up with those goggles for long.  Top that off with the 4K2K movement and you get a bandwidth multiplier that is absolutely crazy.  Looking at the raw data-rate requirement, we can multiply by four to get the refresh rate up to 240Hz, multiply that by two for 3D, multiply that by the number of views we’d need to render for glasses-free 3D, and throw in another 4x multiplier or so for the added resolution. 
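To put rough numbers on that multiplier stack, here is a back-of-the-envelope sketch. The 24 bits per pixel and the eight-view count for glasses-free 3D are assumptions for illustration, not figures from any standard:

```python
# Back-of-the-envelope raw (uncompressed) video bandwidth estimate.
# Assumes 24 bits per pixel; the 8-view figure for glasses-free 3D
# is a guess, not a standard.
def raw_bandwidth_bps(width, height, fps, bits_per_pixel=24):
    """Uncompressed data rate in bits per second."""
    return width * height * fps * bits_per_pixel

base = raw_bandwidth_bps(1920, 1080, 60)            # 1080p60
print(f"1080p60: {base / 1e9:.2f} Gbps")            # ~2.99 Gbps raw

# Stack the multipliers from above: 4x for a 240Hz refresh,
# 2x for stereo 3D, 8x (assumed) views for glasses-free 3D,
# and 4x for the 4K2K pixel count.
future = base * 4 * 2 * 8 * 4
print(f"Multiplied out: {future / 1e12:.2f} Tbps")  # ~0.76 Tbps raw
```

Even before compression enters the picture, that 256x stack lands in the three-quarters-of-a-terabit range — which is the point: the raw numbers are absurd.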

Why do people want all this?  The reasons are not entirely clear.  4K2K, for example, is said to be driven by the needs of feature films for theater display.  However, if you look at the layout of a typical theater, it’s doubtful that the average viewer (at least one sitting any farther back than the first few rows) could tell the difference in resolution brought by 4K2K.  It seems our eyes are just not up to the task – if we’re viewing from more than about one screen height away from the display.  Combine that with the fact that studios are still stuck in the land of 24Hz refresh rates (at 1080p24), and one wonders if it might be more important to have the Batmobile move less than 20 feet down the street in each frame – rather than more accurately rendering the legs on the bugs splattered on the windshield.
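The eyeball argument can be sketched with the textbook one-arcminute acuity figure. Both the acuity value and the flat-screen geometry here are rough approximations, not measurements:

```python
import math

# Rough visual-acuity check: a viewer who resolves ~1 arcminute stops
# distinguishing individual pixel rows beyond a certain distance.
# The 1-arcminute figure is the usual textbook approximation.
ARCMIN = math.radians(1 / 60)

def max_useful_distance(vertical_lines):
    """Distance, in screen heights, beyond which the given number of
    vertical lines exceeds what the eye can resolve."""
    return 1 / (vertical_lines * math.tan(ARCMIN))

print(f"1080 lines pay off inside ~{max_useful_distance(1080):.1f} screen heights")
print(f"2160 lines pay off inside ~{max_useful_distance(2160):.1f} screen heights")
# 1080 lines: ~3.2 screen heights; 2160 lines: ~1.6 screen heights
```

By this crude model, the extra vertical resolution of 4K2K only matters for viewers inside roughly one and a half screen heights — broadly consistent with the one-screen-height rule of thumb, and bad news for anyone past the first few rows.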

Despite the incongruity of the movie industry, more immersive video experiences will definitely require some subset of the crazy amounts of data we were calculating above.  Compounding the problem, all that data doesn’t just come straight to our home on a fiber.  That would be WAY too easy.  Right now, for example, using products I have around the house, I can remotely set my cable box to an HD channel using my Slingbox and direct that program to my iPhone via the Slingbox app.  So the signal is coming into my house via the cable box, going back out through my broadband connection, coming back in via that same broadband connection, and then traveling to my iPhone over my wifi network.  Is that enough?  Oh no.  Now I don’t want to watch this program on my tiny iPhone screen (who does?), so I choose to stream it from my iPhone to my plasma monitor via Apple TV, using Apple’s AirPlay.  Can you say “one more time through the wifi router?”  I knew you could.  Now, whatever program I’m watching is simultaneously going through three separate streams on my wifi router, twice through my broadband connection, and once from the source at the cable company.  While this kind of setup would make Rube Goldberg blush, consumers will do really stupid stuff with their electronics gear.  It still needs to work.  Sell me some bandwidth.

Two major trends seem unavoidable no matter what direction the industry takes: 1) a massive increase in bandwidth will be required to deliver the video content people will be demanding, and 2) the standards for that delivery will be in flux for a long time to come.

Enter FPGAs.

FPGAs are the perfect match for this problem.  The enormous amounts of data that need to be slung around, crunched, re-ordered, re-formatted, and otherwise handled are far beyond what we can get from any reasonably priced conventional processor.  Furthermore, given the enormous cost of developing custom chips these days and the incredible state of flux surrounding these video devices, it’s unlikely that many ASSP developers will have the guts and resources to build specialized chip-sets for anything but the highest-volume, lowest-risk applications.

FPGAs, on the other hand, can handle most of the problems associated with the creation, compression, transport, decompression, and delivery of modern video content – and often with (relatively) low-cost parts.  Manufacturers can also build equipment based on in-flux standards with the confidence that they can adapt to changes by simply re-configuring FPGAs in the field. 

The biggest barrier to FPGA adoption for video applications is – engineers.  It turns out that those of us who spent our careers learning the intricacies of video didn’t always also spend our careers learning the vagaries of FPGA design.  Many companies put video experts and FPGA experts on the same team and hope for the best.  Others send video people for FPGA training, or FPGA people for video training.  None of these solutions is optimal. 

FPGA companies are working to solve this issue on several fronts.  First, there is the slow, persistent drumbeat of making FPGAs easier to use.  Year to year, the tools get better, the kits get more complete, and the bar for getting good results from FPGA design gets lower.  Second, however, is a more powerful trend.  This week at CES, Xilinx announced the second generation of their Consumer Video Kit.  This kit, part of Xilinx’s “Targeted Design Platforms” initiative, is quite a bit more than just a development board.  For applications like video in particular, we need a number of specialized hardware connections and an extensive library of domain-specific IP.  In addition, Xilinx includes validated reference designs that get us up and running toward our goal without any design work at all. 

The Xilinx kit – developed along with Tokyo Electron Device, Ltd. – is based on a Spartan-6 LX150T “base board” with an FPGA Mezzanine Connector (FMC).  The second-generation board has three FMC connectors, a larger (900-pin) FPGA, and more memory (3x DDR SDRAM).  The kit includes 1.05 Gbps LVDS, DisplayPort 1.1a (Tx/Rx), V-by-One HS, and HDMI 1.4a; USB 3.0 and SATA Gen 2 support are new additions.  It also includes evaluation versions of the DisplayPort and V-by-One HS LogiCORE IP cores.

Altera also offers specialized audio and video development kits – based on their Cyclone and Stratix IV devices.  Altera’s kits utilize their HSMC (high-speed mezzanine card) rather than Xilinx’s FMC, along with a comparable array of IP and reference designs. 

Specialized kits like these are likely to appear in many more areas as FPGAs spread into new application domains.  Because of the tremendous activity and enormous market potential in video right now, video kits were at the top of the list for FPGA vendors wooing new customers.  If your application is more specialized, it may be a while longer before you get anything with as much ready-to-eat, shrink-wrapped productivity as the video folks are getting. 

Meanwhile, those of you who still have the old video gear in your basement, go down and power it up one last time.  Some of those quaint analog artifacts that we all grew up with are about to disappear from the world altogether, and, within a generation, it’s likely people will never know they even existed. 
