
When The Bottom Drops Out

Rising With Moore’s Law Leaves a Hole

It’s always exciting at the cutting edge. Here at EE Journal, we are always having fun learning about and bringing you news about the latest, greatest, biggest, fastest, coolest, most exotic accomplishments of our global engineering community. We love to surf the crest of Moore’s Law and gasp in amazement at the millions, then billions, then trillions of gates, LUTs, transistors, hertz, FLOPs, cycles, bytes, pins, users, dollars, and every other amazing metric that this dynamic industry seems to constantly generate.

With the Moore-driven juggernaut plowing exponentially through technologically tumultuous seas, it’s easy to focus on the bow and to forget what’s happening at the stern. There, the back of our boat leaves a gaping displacement hole – with turbulent wakes crashing and swirling in eddies of unintended consequence. All of us who design products to last more than a couple of years are painfully familiar with the supply-chain difficulties of obtaining the “old” parts we designed in so we can continue to produce and service our products. But, there are other consequences of the trailing edge of Moore’s Law as well.

Looking at the FPGA industry, we have chronicled a continual re-definition of what “FPGA” really means over the past two decades. FPGAs have gone from modest glue logic parts, designed in at the last minute to fix design holes in our systems, to sophisticated multi-million-gate systems-on-chip with uncanny capabilities from fixed, hardened, functional blocks. We have seen the “leading edge” FPGAs go from a handful of LUTs to millions of LUTs, and, with that shift, the game has changed substantially.

For the big players – Altera and Xilinx – that has always meant marketing at the top. The parade of press releases swapping superlatives about the world’s biggest this and the world’s fastest that has been a constant backdrop throughout the history of the programmable logic industry. The two have often tripped over themselves and each other in their efforts to win each leg of a frenetic race that almost nobody was watching. It’s quite clear that every person at those two companies knew who announced the next process node first. It’s not clear that anyone else noticed. For them, it was all about the next big, fast, amazing thing. The old, small, cheap stuff fell by the wayside.

But, not everybody needs million-LUT FPGAs. There are still a lot of applications that can make very good use of a few hundred LUTs. For years, this vacuum in the wake of FPGAs was filled by CPLDs. FPGAs were the “big” devices, and CPLDs brought up the rear. However, we reached a point where CPLD technology was no longer the economical way to do the job. Companies started producing devices marketed as CPLDs that were, in fact, FPGAs. After a while, the CPLD label mostly fell by the wayside and was replaced with designations like “low-density PLD,” which translates into “really small FPGAs.”

After a while, the gap left between “FPGA” and “CPLD” forced the FPGA companies to re-think their marketing. It was too challenging to market products that cost over $1000 per chip and products that cost less than $2 per chip – all under the same name. Altera and Xilinx both launched “low-cost” lines – “Cyclone” and “Spartan.” While these devices quickly became extremely popular, it isn’t clear how profitable they were for the two companies. Both companies seemed to struggle with their strategies and commitment to low-cost FPGAs over the years. Both have skipped entire process nodes with their low-cost offerings, and both have run hot and cold in their marketing emphasis on the smaller devices.

A quick back-of-the-envelope calculation makes the questionable economics for the big companies a little clearer. Consider the $1000 FPGA (which has typically sat in the upper middle of the high-end families…). If somebody orders 1000 units for a medium-small production run, that’s a million dollars of revenue. One could sit for hours reciting the applications and companies that could use that kind of capability at that price. That’s how the FPGA companies have made their money for years. Now consider the $1 low-cost FPGA. For that same million in revenue, one has to find an application with a production run of a million units. The number of customers and applications for million-unit production runs is probably substantially smaller than the number of 1000-unit runs at the high end.
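
For the curious, here is that arithmetic spelled out in a few lines of Python. The prices and volumes are simply the round numbers from the paragraph above, used for illustration – not actual price-list figures:

    # Back-of-the-envelope revenue comparison (round illustrative numbers only).
    high_end_price = 1000   # dollars per high-end FPGA
    low_cost_price = 1      # dollars per low-cost FPGA
    high_end_run = 1000     # units in a medium-small production run

    revenue = high_end_price * high_end_run               # $1,000,000
    low_cost_units_needed = revenue // low_cost_price     # 1,000,000 units

    print(f"High-end run revenue:       ${revenue:,}")
    print(f"Low-cost units to match it: {low_cost_units_needed:,}")

The asymmetry is the whole point: the low-cost vendor needs a thousand times the unit volume to book the same revenue.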

The “jump fast on the new process” approach isn’t nearly as attractive for low-cost devices, either. With high-end FPGAs, the new node brings more capability, more memory, faster logic and IOs, lower power consumption, and lower cost. It’s a big ol’ barrel of goodness. For small FPGAs, the only real gain is potentially lower unit cost. We say “potentially” because, early in the production of a new family, yields are typically low. It takes time and experience to gradually increase yield and decrease unit cost on a new, leading-edge semiconductor process. It also takes a lot of time and units to amortize the NRE involved in putting an FPGA family on a new process. The net result is that it probably takes quite a while before an FPGA company sees actually lower costs for a low-cost family on a new node versus continued volume production on an old one.
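
To make that concrete, here is a rough sketch of how the per-unit math can play out. Every figure in it is invented purely for illustration – real die costs, yields, and NRE numbers are closely guarded and vary enormously:

    # Effective per-unit cost on an old vs. new process node.
    # All figures below are invented for illustration only.
    def unit_cost(die_cost, yield_fraction, nre, volume):
        """Cost per good unit once NRE is amortized over the production run."""
        return die_cost / yield_fraction + nre / volume

    # Old node: mature yield, NRE long since paid off.
    old = unit_cost(die_cost=0.60, yield_fraction=0.95, nre=0, volume=1)

    # New node: cheaper die, immature yield, fresh NRE to recover.
    for volume in (1_000_000, 10_000_000, 100_000_000):
        new = unit_cost(die_cost=0.40, yield_fraction=0.70, nre=5_000_000, volume=volume)
        print(f"{volume:>11,} units: new node ${new:.2f} vs. old node ${old:.2f}")

With these made-up numbers, the new node doesn’t break even until somewhere between ten and a hundred million units – which is exactly the point: for a low-cost family, the payoff from jumping to the latest process can be a long time coming.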

If one tracks the history of the big two FPGA companies’ offerings, one can see a bit of a vacuum at the low end. With serious competition at the top and an almost myopic focus on their traditional competitor, both companies left the low end mostly undefended. As high-end families have gotten larger, the mid-range and low-end families have essentially stepped up to take the place of “last year’s” parts – at a much lower cost.

Several companies have stepped in to fill the vacuum left by the big two at the low end. Most notably, Lattice Semiconductor has built almost their entire strategy around beating Xilinx and Altera at the medium and low densities in programmable logic. Lattice has carefully analyzed the applications for lower-density devices and crafted their offerings to win specific high-value market segments. It is a strategy that has worked well for them, and it has brought their company back from the brink of failure to a respectable level of success.

Other players like Actel/Microsemi, QuickLogic, and SiliconBlue have attacked the low-cost vacuum as well. Actel altered their strategy significantly when they were acquired by Microsemi, QuickLogic went after a more application-specific strategy, and SiliconBlue – with their unbelievably tiny FPGAs – was acquired by Lattice. That turn of events and strategies has left Lattice pretty much alone, with Xilinx and Altera’s lowest-cost offerings clearly in their gunsights, while Xilinx and Altera look mostly in the other direction – at the high end, at each other, and at interesting competitors like Achronix and Tabula emerging to challenge their flagship products.

It isn’t clear what’s next at the small end of the programmable logic spectrum. There are certainly technology needs that are uniquely served by these devices that don’t require any of the fancy features of today’s big FPGAs. There is still clearly competition in the arena, despite a lack of focus from the largest companies. And, there is a hungry and aggressive Lattice working hard to pick up the slack. It will be interesting to watch.

2 thoughts on “When The Bottom Drops Out”

  1. I think Lattice is going with Moore’s Law. Owning SiliconBlue’s 40nm FPGAs and their alliance with UMC on 28nm are parts of this. Another part is the planned ECP3 mini, a 28nm, $5, 14-KLUT device.

    Another thing Lattice got with the purchase of SiliconBlue is an exclusive license to use Kilopass’s one-time-programmable memory, which is a fit for 40nm and 28nm.

    I wonder if they plan to release a 28nm OTP FPGA+MCU device, priced competitively with MCUs. That would be an interesting device.
