
Flex Logix Fires Second Salvo

Challenging FPGAs on AI Applications

For decades now, FPGA companies have struggled to overcome their de facto positioning as “ASIC alternatives.” Of course, FPGAs are great for prototyping your design, or for getting something into production much earlier than you could with a custom chip design. But, eventually, for designs that go into volume production, there comes a time when it’s worth designing an ASIC or ASSP to do the same thing, yielding better performance, lower power consumption, smaller area, and lower unit cost. This is bad for FPGA companies because, just when a design win should turn into higher volume and long-term revenue, the FPGA gets dropped off the board and replaced with a custom device.

For FPGA providers, the way out of this sad situation is to find applications where reprogrammability itself is a core requirement. Software-defined radio, for example, is such a “killer app.” The application requires programmable fabric so modems can be created, loaded, and dispatched on the fly. It doesn’t make sense to replace FPGA-based logic with hardened gates, because in-system reprogrammability is a fundamental part of the application. The result? The FPGA company gets to keep the socket even as production volume rises, without fear of being yanked in favor of a custom chip.

Recently, neural network inferencing has emerged as another of these FPGA “killer apps.” FPGA LUT fabric (along with fixed-point DSP resources) delivers spectacular performance/power characteristics on neural network inferencing, and in-system reprogrammability is a must. On top of that, there is an enormous number of applications that can take advantage of AI/neural network technology. For FPGA companies, it could be the proverbial bird’s nest on the ground – a high-volume, high-value role in a wide variety of new applications. You can almost hear the champagne corks popping at FPGA headquarters…
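To see why those MAC-heavy resources matter so much, here is a minimal C sketch of the fixed-point multiply-accumulate kernel at the heart of inference workloads: a dot product between quantized weights and activations. The int8 operands and 32-bit accumulator are illustrative assumptions about quantization, not a description of any particular FPGA’s DSP block.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Fixed-point dot product: the inner loop of a neural-network layer.
     * Each term is one multiply-accumulate, which is exactly the operation
     * that hard MAC/DSP blocks in FPGA fabric execute in parallel.
     * int8 operands with a 32-bit accumulator are illustrative choices. */
    static int32_t dot_q8(const int8_t *w, const int8_t *x, size_t n)
    {
        int32_t acc = 0;
        for (size_t i = 0; i < n; i++)
            acc += (int32_t)w[i] * (int32_t)x[i];   /* one MAC per term */
        return acc;
    }

    int main(void)
    {
        int8_t w[4] = { 3, -2, 5, 1 };
        int8_t x[4] = { 10, 4, -1, 7 };
        printf("dot = %d\n", (int)dot_q8(w, x, 4));  /* 30 - 8 - 5 + 7 = 24 */
        return 0;
    }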

“Not quite so fast, there, cowboys,” says Flex Logix.

As we’ve discussed before, Flex Logix provides IP that allows designers to put FPGA LUT fabric on custom chips. Recently, they announced that their latest-generation EFLX cores allow embedded FPGA arrays of up to 122.5K LUTs to be built on TSMC 16FF+ and 16FFC processes. This means you can bring the benefits of FPGA-like reprogrammability to custom chips for applications such as neural network inferencing. Flex Logix fabric comes in 2.5K LUT blocks, which can be arrayed to build the desired size – from 2.5K up to 122.5K LUTs.
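As a quick sanity check on that arithmetic, here is a small C sketch that enumerates rectangular tilings of 2.5K-LUT blocks (treating 2.5K as 2,500 for round numbers): a 7x7 grid of 49 tiles lands exactly on the 122.5K-LUT maximum. The rectangular-grid assumption is ours for illustration, not a statement of Flex Logix’s actual floorplanning rules.

    #include <stdio.h>

    /* Enumerate array sizes built from 2.5K-LUT tiles. A 7 x 7 grid of
     * 49 tiles gives 49 * 2,500 = 122,500 LUTs, matching the 122.5K
     * maximum quoted for the new EFLX arrays. The grid geometry here is
     * an illustrative assumption. */
    int main(void)
    {
        const int luts_per_tile = 2500;
        for (int rows = 1; rows <= 7; rows++)
            for (int cols = rows; cols <= 7; cols++)
                printf("%d x %d tiles -> %.1fK LUTs\n",
                       rows, cols, rows * cols * luts_per_tile / 1000.0);
        return 0;
    }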

What does this mean for the FPGA companies looking for that long-term socket? It means that programmability is no longer a competitive moat against ASIC/ASSP incursion. Design teams can build custom chips with all the benefits of reprogrammability rather than committing to off-the-shelf FPGAs for long-term, high-volume production. This doesn’t cut conventional FPGAs out of the picture, but it does put a damper on their aspirations to become the long-term, unchallenged solution for applications that want to cost-reduce for high volumes.

This is the second generation of the Flex Logix IP cores. These blocks are based on 6-input LUTs, which can also be configured as dual 5-input LUTs. These are similar to the logic cells used by mainstream FPGAs. Each block can be either a “logic” block or a “DSP” block, where the DSP version replaces some LUTs with 40 22x22-bit MACs. Each IP block uses CMOS I/Os to talk to the rest of the chip, so you gain considerable bandwidth and power efficiency compared with using a separate, stand-alone FPGA alongside your custom device. Flex Logix uses a proprietary, high-density routing architecture, which they claim has been further improved in this second generation, giving a very respectable 1.0 mm² footprint with only six routing layers for a 2.5K LUT block on TSMC 16FF+/16FFC.
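For readers who haven’t stared at LUT-based logic lately, here is a behavioral C sketch of a 6-input LUT that can be fractured into two 5-input functions sharing the same inputs. Whether EFLX’s dual-5-LUT mode shares inputs exactly this way is an assumption on our part; the sketch simply illustrates the general idea.

    #include <stdint.h>
    #include <stdio.h>

    /* A 6-input LUT is a 64-entry truth table; the 6-bit input vector
     * selects one entry. Splitting the table in half yields two 5-input
     * functions of the same five inputs (a common way 6-LUTs fracture). */
    static int lut6(uint64_t truth, unsigned in)            /* in: 6-bit vector */
    {
        return (int)((truth >> (in & 0x3F)) & 1);
    }

    static void lut5x2(uint64_t truth, unsigned in,         /* in: 5-bit vector */
                       int *o0, int *o1)
    {
        *o0 = (int)((truth >> (in & 0x1F)) & 1);            /* lower 32 entries */
        *o1 = (int)((truth >> (32 + (in & 0x1F))) & 1);     /* upper 32 entries */
    }

    int main(void)
    {
        uint64_t and6 = 1ULL << 63;          /* 6-input AND: only entry 63 is 1 */
        printf("AND6(all ones) = %d, AND6(one zero) = %d\n",
               lut6(and6, 0x3F), lut6(and6, 0x2F));

        /* Dual-5-LUT mode: lower half = 5-input AND, upper half = 5-input OR. */
        uint64_t tt = (0xFFFFFFFEULL << 32) | (1ULL << 31);
        int o0, o1;
        lut5x2(tt, 0x1F, &o0, &o1);
        printf("AND5(all ones) = %d, OR5(all ones) = %d\n", o0, o1);
        return 0;
    }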

Flex Logix also says the new architecture further improves its novel interconnect, giving higher performance for larger arrays. The MAC blocks are now structured to be pipelined ten in a row, which allows local interconnect to carry heavily chained datapaths such as FIR filters, improving performance and reducing the demand on external routing resources. A new test mode shortens test times, and there are additional miscellaneous DFT enhancements. For high-reliability (particularly aerospace) designs, a “readback” feature allows the configuration to be checked and scrubbed periodically, so that if radiation-induced single-event upsets or other environmental “soft” errors have corrupted the configuration, the device can be quickly reconfigured.
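The FIR case is easy to picture. Below is a behavioral C sketch of a transposed-form FIR built from a chain of ten MACs, where the z[] array stands in for the pipeline registers between adjacent MAC blocks. The data widths and coefficient values are illustrative assumptions; the point is that each stage only ever talks to its immediate neighbor, which is why local interconnect suffices.

    #include <stdint.h>
    #include <stdio.h>

    #define TAPS 10   /* mirrors the ten-MACs-in-a-row chaining */

    /* Transposed-form FIR: each stage multiplies the current input sample
     * by its coefficient and adds the delayed partial sum handed to it by
     * the next stage. In hardware, z[] would be the pipeline registers
     * between adjacent MAC blocks. */
    static int32_t fir_step(const int16_t coef[TAPS], int32_t z[TAPS], int16_t x)
    {
        int32_t y = z[0] + (int32_t)coef[0] * x;            /* output MAC */
        for (int i = 0; i < TAPS - 1; i++)
            z[i] = z[i + 1] + (int32_t)coef[i + 1] * x;     /* shift partial sums */
        z[TAPS - 1] = 0;
        return y;
    }

    int main(void)
    {
        int16_t coef[TAPS] = { 1, 2, 3, 4, 5, 5, 4, 3, 2, 1 };  /* example taps */
        int32_t z[TAPS] = { 0 };
        int16_t x[12] = { 1, 0 };                               /* impulse input */
        for (int n = 0; n < 12; n++)
            printf("y[%2d] = %d\n", n, (int)fir_step(coef, z, x[n]));
        return 0;
    }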

One of the key barriers to embedded FPGAs has always been design tools for the FPGA fabric. While plopping down a bunch of LUTs is a fairly straightforward task, providing and supporting the complex set of design tools for synthesis, place-and-route, and bitstream generation is a much more demanding undertaking. The Flex Logix EFLX compiler addresses this need, and it is available for no-cost evaluation. Speaking of evaluation, the new cores are being fabricated now, and evaluation boards will be available to customers under NDA. Flex Logix proves all new cores in silicon itself to ensure that customers’ integration experience is smooth.

The key question is, will Flex Logix get traction with the new cores in strategic applications like AI? They’re off to a good start. This week, the company announced that its embedded array is part of a next-generation deep-learning chip being developed by the research group of Professors David Brooks and Gu-Yeon Wei at Harvard’s John A. Paulson School of Engineering and Applied Sciences. The device has already gone to tape-out and is going into fabrication – giving an early view into the practicality of EFLX integration for AI applications.

Flex Logix is only a couple of years old, but the company has already taken the embedded FPGA idea further than any previous attempt we are aware of. Their tile-based structure gives chip designers a lot of flexibility in balancing the FPGA fabric against the other resources on the chip for their particular application. We have not yet used the tool suite ourselves, nor talked with customers who have, and the tools are likely the critical make-or-break factor for widespread success of the technology. It will be interesting to watch.
