
When The Bottom Drops Out

Rising With Moore’s Law Leaves a Hole

It’s always exciting at the cutting edge. Here at EE Journal, we have fun learning about and bringing you news of the latest, greatest, biggest, fastest, coolest, most exotic accomplishments of our global engineering community. We love to surf the crest of Moore’s Law and gasp in amazement at the millions, then billions, then trillions of gates, LUTs, transistors, hertz, FLOPs, cycles, bytes, pins, users, dollars, and every other amazing metric that this dynamic industry seems to constantly generate.

With the Moore-driven juggernaut plowing exponentially through technologically tumultuous seas, it’s easy to focus on the bow and to forget what’s happening at the stern. There, the back of our boat leaves a gaping displacement hole – with turbulent wakes crashing and swirling in eddies of unintended consequence. All of us who design products to last more than a couple of years are painfully familiar with the supply-chain difficulties of obtaining the “old” parts we designed in so we can continue to produce and service our products. But, there are other consequences of the trailing edge of Moore’s Law as well.

Looking at the FPGA industry, we have chronicled a continual re-definition of what “FPGA” really means over the past two decades. FPGAs have gone from modest glue logic parts, designed in at the last minute to fix design holes in our systems, to sophisticated multi-million-gate systems-on-chip with uncanny capabilities from fixed, hardened, functional blocks. We have seen the “leading edge” FPGAs go from a handful of LUTs to millions of LUTs, and, with that shift, the game has changed substantially.

For the big players – Altera and Xilinx – that has always meant marketing at the top. The parade of press releases swapping superlatives about the world’s biggest this and the world’s fastest that has been a constant backdrop throughout the history of the programmable logic industry. The two have often tripped over themselves and each other in their efforts to win each leg of a frenetic race that almost nobody was watching. It’s quite clear that every person at those two companies knew who announced the next process node first. It’s not clear that anyone else noticed. For them, it was all about the next big, fast, amazing thing. The old, small, cheap stuff fell by the wayside.

But, not everybody needs million-LUT FPGAs. There are still a lot of applications that can make very good use of a few hundred LUTs. For years, this vacuum in the wake of FPGAs was filled by CPLDs. FPGAs were the “big” devices, and CPLDs brought up the rear. However, we reached a point where CPLD technology was no longer the economical way to do the job. Companies started producing devices marketed as CPLDs that were, in fact, FPGAs. After a while, the CPLD label mostly fell by the wayside and was replaced with designations like “low-density PLD,” which translates to “really small FPGAs.”

After a while, the gap between what had been “FPGA” and “CPLD” territory required the FPGA companies to re-think their marketing. It was too challenging to market products that cost over $1000 per chip and products that cost less than $2 per chip – all with the same name. Altera and Xilinx both launched “low cost” lines – “Cyclone” and “Spartan.” While these devices quickly became extremely popular, it isn’t clear how profitable they were for the two companies. Both companies seemed to struggle with their strategies and commitment to low-cost FPGAs over the years. Both have skipped entire process nodes with their low-cost offerings. Both have run hot and cold with their marketing emphasis on the smaller devices.

If one does a quick back-of-the-envelope calculation, the questionable economics for the big companies becomes a little clearer. Consider the $1000 FPGA (which has typically been in the upper middle of the high-end families…) – if somebody orders 1000 units for a medium-small production run, that’s a million dollars of revenue. One could sit for hours reciting companies and applications that could use such a capability at that price. That’s how the FPGA companies have made their money for years. Now, consider the $1 low-cost FPGA. For that same million in revenue, one has to find an application where the production run is a million units. The number of customers and applications needing a million-unit production run is probably substantially smaller than the number needing 1000-unit runs of high-cost parts.
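
To make that comparison concrete, here is a minimal sketch of the arithmetic above, using the article’s illustrative prices and run sizes (not real price quotes):

```python
# Back-of-the-envelope: revenue from a high-end vs. a low-cost FPGA design win.
# Prices and run sizes are the article's illustrative figures, not real pricing.

high_end_price = 1000    # dollars per chip, upper-middle of a high-end family
high_end_run = 1000      # units in a medium-small production run

low_cost_price = 1       # dollars per chip, low-cost family

target_revenue = high_end_price * high_end_run        # $1,000,000
low_cost_run = target_revenue / low_cost_price        # units needed to match it

print(f"High-end win: {high_end_run:>9,} units -> ${target_revenue:,.0f}")
print(f"Low-cost win: {low_cost_run:>9,.0f} units -> ${target_revenue:,.0f}")
```

A thousand $1000 chips and a million $1 chips bring in the same revenue – but far fewer customers need million-unit runs.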

The “jump fast on the new process” method isn’t nearly as attractive for low-cost devices, either. With high-end FPGAs, the new node brings more capability, more memory, faster logic and I/O, lower power consumption, and lower cost. It’s a big ol’ barrel of goodness. For small FPGAs, the only real gain is in potentially lower unit cost. We say “potentially” because, early in the production of a new family, yields are typically low. It takes time and experience to gradually increase yield and decrease unit cost with a new, leading-edge semiconductor process. It also takes a lot of time and units to amortize the NRE involved in putting an FPGA family on a new process. The net result is that it probably takes quite a while before an FPGA company sees actually lower costs for a low-cost family on a new node versus continued volume production on an old one.
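
A rough way to see why the savings take time to materialize is to model per-unit cost as die cost divided by yield, plus NRE amortized over cumulative volume. The sketch below uses made-up placeholder numbers chosen only to show the shape of the curve – they are not real foundry, yield, or NRE figures:

```python
# Toy model of per-unit cost on a new node vs. a mature one.
# All numbers are hypothetical placeholders, not real process economics.

def unit_cost(die_cost, yield_rate, nre, cumulative_units):
    """Die cost scaled by yield, plus NRE spread over units shipped so far."""
    return die_cost / yield_rate + nre / cumulative_units

# Mature node: NRE long since paid down over huge volume, high yield.
mature = unit_cost(die_cost=0.60, yield_rate=0.95,
                   nre=2_000_000, cumulative_units=20_000_000)

# New node: smaller die, but low early yield and fresh NRE over modest volume.
new_early = unit_cost(die_cost=0.40, yield_rate=0.60,
                      nre=5_000_000, cumulative_units=1_000_000)
new_later = unit_cost(die_cost=0.40, yield_rate=0.90,
                      nre=5_000_000, cumulative_units=20_000_000)

print(f"Mature node:          ${mature:.2f} per unit")
print(f"New node, early ramp: ${new_early:.2f} per unit")
print(f"New node, after ramp: ${new_later:.2f} per unit")
```

With placeholder numbers like these, the new node only undercuts the mature one after yield improves and enough units have shipped to pay down the NRE.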

If one tracks the history of the offerings of the big two FPGA companies, one can see a bit of a vacuum at the low end. With serious competition at the top and an almost myopic focus on their traditional competitor, both companies left the low end mostly undefended. As high-end families have gotten larger, the mid-range and low-end families have essentially stepped up to take the place of “last year’s” parts – at a much lower cost.

Several companies have stepped in to fill the vacuum left by the big two at the low end. Most notably, Lattice Semiconductor has built almost their entire strategy around beating Xilinx and Altera at the medium and low densities in programmable logic. Lattice has carefully analyzed the applications for lower-density devices and crafted their offerings to win specific high-value market segments. It is a strategy that has worked well for them, and it has brought their company back from the brink of failure to a respectable level of success.

Other players like Actel/Microsemi, QuickLogic, and SiliconBlue have attacked the low-cost vacuum as well. Actel altered their strategy significantly when they were acquired by Microsemi, QuickLogic went after a more application-specific strategy, and SiliconBlue – with their unbelievably tiny FPGAs – was acquired by Lattice. That turn of events and strategies has left Lattice pretty much alone, with Xilinx’s and Altera’s lowest-cost offerings clearly in its gunsights, while Xilinx and Altera look mostly in the other direction – at the high end, at each other, and at interesting competitors like Achronix and Tabula emerging to challenge their flagship products.

It isn’t clear what’s next at the small end of the programmable logic spectrum. There are certainly applications uniquely served by these devices that don’t require any of the fancy features of today’s big FPGAs. There is still clearly competition in the arena, despite a lack of focus from the largest companies. And there is a hungry and aggressive Lattice working hard to pick up the slack. It will be interesting to watch.

2 thoughts on “When The Bottom Drops Out”

  1. I think Lattice is going with Moore’s Law. Owning SiliconBlue’s 40nm FPGA and their alliance with UMC on 28nm are parts of this. Another part is the planned ECP3 mini, a 28nm, $5, 14-kLUT device.

    Another thing Lattice got with the purchase of SiliconBlue is an exclusive license to use Kilopass’s one-time-programmable memory that’s suitable for 40nm and 28nm.

    I wonder if they plan to release a 28nm OTP FPGA+MCU device, priced competitively with MCUs. That would be an interesting device.
