
Why Are There No Successful New FPGA Companies?

FPGAs are wonderful things – but you know that; otherwise you wouldn’t be reading this. This year marks 25 years since Xilinx shipped its first product, the XC2064, coining the name Field Programmable Gate Array to distinguish the device from other programmable logic and to rank it alongside the gate arrays then in use for building ASICs.

By 1990, there were around 20 companies attempting to play in this market, but already Xilinx and Altera were pulling ahead, with Lattice, Actel, Atmel, and QuickLogic snapping at their heels. Today, twenty years later, Xilinx and Altera between them dominate the market, QuickLogic is in the programmable devices field but distances itself from the FPGA arena, and Atmel appears to be using FPGA fabric only with its AVR processor family. Actel and Lattice have a fraction of the market of Altera and Xilinx (although they dominate some niche markets, such as Actel in rad-hard parts). Oh, and there are some start-ups, such as SiliconBlue, Achronix, and Tabula, hovering in the wings.

In that time, around forty players have entered and left the FPGA stage. And this summer two more, Abound (formerly M2000) and Tier Logic, have made their exits.

John East is soon to retire from Actel, the company he has led for more than twenty of those years. He has views on why it is so difficult to set up an FPGA business. (He also has some interesting, but currently un-publishable, war stories. Perhaps after his retirement we will be able to hear some of them.)

The discussion that follows is based in part on John’s views, with input from a range of other people and from my own nearly 25 years of being around the FPGA business. Any mistakes are, naturally, my fault. The things that are right are, equally naturally, thanks to other people.

John makes the point that any established high-tech market usually settles down to one or two leaders; attempts to break in directly are normally failures. Historically, divergent approaches are the way forward. So the main-frame computer market was dominated by IBM; the divergent approach was the mini-computer, requiring less space and less power and initially dedicated to scientific processing, where the leader was DEC. The next divergent step forked: workstations, dominated by Sun, and the personal computer, dominated this time not by a manufacturer but by Microsoft software and Intel processors. Now ARM has come to dominate the next step in processors.

It seems too difficult to find such a divergent step in the FPGA business. Just making something bigger or faster doesn’t count as divergent.

The FPGA business is expensive to break into. You don’t just develop silicon, which is expensive enough, with mask sets alone running to millions of dollars a throw. You need software for users to design and implement their devices, and you need applications and IP to be developed and tested. The first IP was listed in a book of macros; today, users want entire sub-systems that can drop into place. Selling an FPGA is a complex task, and users may require support from an army of FAEs, who, along with the sales force, require specialist product training.

This is not a one-off cost. Once you start, you are on a treadmill. Faster, bigger, lower-power products always have to be under development. These in turn require improved software and an expanding IP portfolio. You have to sell your product, in competition with the established suppliers, at a margin that will allow you to stay on the treadmill and also make the company profitable enough for the investors to get their money back. There are rumours that a relatively recent start-up has so far attracted $200 million in funding. That company is going to have to sell a lot of FPGAs if its investors are ever to see their investment returned.

One path that some people have suggested might be worthwhile is to go for the consumer market. While margins are low, volumes are high, the silicon is smaller (more die per wafer, which means lower cost per die) and the software is simpler. Against this might be set the increasing requirement of consumer companies for total solutions – they just want to drop a chip, with the application, into a slot. They want the relevant software pre-written and pre-tested, which is not a cheap option. And consumer products notoriously have a very short product life. A product will sell in high volume for around six months, and then the next version comes along, possibly without your FPGA in it. This is in contrast to a market like telecommunications infrastructure, where a product may be manufactured for eight or more years.
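
To put a rough number on the “more die per wafer” argument, here is a back-of-the-envelope sketch in Python. The wafer cost and die sizes are illustrative assumptions, not real foundry figures, and the die-per-wafer formula is the standard textbook approximation that ignores yield and scribe lines – the point is only the shape of the curve, not the dollar values.

```python
# Rough sketch of why smaller die cost less. All numbers are illustrative
# assumptions, not real foundry pricing.
import math

def gross_die_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Textbook approximation: pi*(d/2)^2/A - pi*d/sqrt(2A), ignoring yield."""
    radius = wafer_diameter_mm / 2
    wafer_area = math.pi * radius ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

wafer_cost = 4000.0  # assumed cost of one processed 300 mm wafer, in dollars

for die_area in (25, 100, 400):  # mm^2: small consumer part vs. large high-end FPGA
    n = gross_die_per_wafer(300, die_area)
    print(f"{die_area:>4} mm^2 die: ~{n:>5} die/wafer, ~${wafer_cost / n:6.2f} per die")
```

Under those assumptions, a 25 mm² die comes out well under two dollars while a 400 mm² die is closer to thirty – which is why the low-margin consumer route looks tempting on silicon cost alone.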

We have referred to software. The software for designing and implementing FPGAs is horrendously complex, with many millions of lines of code. Place and route, for example, is a complex task, and if you want to optimise the design for speed or for power, it is even more complex. The existing suites have been built up over the last 25 years. While no one will say they are perfect, they are pretty impressive in what they can do. A new entrant is going to have to start from scratch, and often the first offering is limited to a simple place-and-route function and not a great deal else. Third-party tools have never made much penetration in this market. This is in part because the “free” tools from the established suppliers are at least adequate, and sometimes they are much more. (I have put “free” in quotes since a part of your silicon price is paying for the development and maintenance of the design tools.) But the most important issue is that a third-party tools supplier needs detailed information on the technology of the target device. Without very close cooperation from the hardware people (and why should they provide this?), the tool is never going to be better than good.
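
To give a flavour of why placement alone is hard, here is a deliberately toy sketch of the kind of optimisation loop at its heart – a simulated-annealing placer over a made-up five-cell netlist on a 3x3 grid. This is an illustration only, nothing like the engines the FPGA vendors ship, which must also juggle timing, power, routing congestion, and devices with hundreds of thousands of cells.

```python
# Toy illustration of why placement is a hard optimisation problem.
# Hypothetical netlist and grid; not any vendor's actual algorithm.
import math
import random

# Each net is a list of cells that must be connected together.
nets = [["a", "b"], ["b", "c", "d"], ["a", "d"], ["c", "e"], ["d", "e"]]
cells = sorted({c for net in nets for c in net})

# Start from an arbitrary placement on a small grid of sites.
sites = [(x, y) for x in range(3) for y in range(3)]
placement = dict(zip(cells, random.sample(sites, len(cells))))

def wirelength(pl):
    """Half-perimeter wirelength: a cheap, common estimate of routing cost."""
    total = 0
    for net in nets:
        xs = [pl[c][0] for c in net]
        ys = [pl[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

temperature = 5.0
for _ in range(2000):
    a, b = random.sample(cells, 2)            # propose swapping two cells
    trial = dict(placement)
    trial[a], trial[b] = trial[b], trial[a]
    delta = wirelength(trial) - wirelength(placement)
    # Always accept improvements; accept some bad moves early on (annealing).
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        placement = trial
    temperature *= 0.999                      # cool down gradually

print("final wirelength:", wirelength(placement))
```

Even this stripped-down loop has knobs (cost function, cooling schedule, move set) that take real engineering to tune – multiply that by every objective a real tool must optimise and you get a sense of the 25-year head start the incumbents enjoy.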

Which brings us to another barrier – you people. If you have spent the last few years working with company A’s or company X’s products and tools, you have a lot invested, and you pretty much understand how the tools work. When a salesman from a new company explains how his new FPGA is going to be bigger or faster or lower power, or has all these neat features, like a gazillion SerDes channels, you are thinking, “This is all very well. But the established guys will be there in a matter of months. And these advantages are not so great that I want to throw away the fruits of late nights and weekends fighting with the design software to start all over to learn a whole new approach.” Now, to the salesman, this sounds like a stick-in-the-mud approach, but it is a rational approach, particularly if, in the past, you have been burned by underperforming software or by hardware that doesn’t actually do all the things the salesman said it would. After all, no one got sacked for buying IBM main-frame computers.

OK – we have looked at the barriers to entry, and they are clearly significant for start-ups. Why did big, established, wealthy companies also fail to make it? The reasons are multiple, but one common thread is that they were big, established, wealthy companies. A salesman – sorry to bring them back in again, but they are a fact of life – might be faced with the choice of selling memories or FPGAs. Memories are relatively simple to sell (I carefully said “relatively” simple), the volumes are high, the margins are clear, and the support is minimal. FPGAs are quite hard to sell; they are not sold in huge volumes even now, and twenty years ago they were only ever small-volume purchases; and they require all sorts of support tools.

You are a rational human with a lifestyle to support – what route do you choose?

The same arguments go for other activities within the company. Production management, looking at the fab loadings, has to choose between high volumes at medium margins or smaller volumes at potentially higher margins. (Dedicated FPGA companies have always been fabless, so for them this doesn’t arise.) The product development team has to choose between going with “simple” silicon or products that will mean recruiting more software guys and applications developers. Applications engineering has to decide where to allocate staff. Every one of these decisions is hard, and in all these cases, even with a strong and separate FPGA team, the FPGA is going to have to fight hard for its place in the sun. Sooner or later, the large companies decided to stick to their core competencies, and they sold off or closed down their FPGA operations.

So breaking into FPGAs is complex, expensive, and hard. Does that mean no one is going to do it? The jury is out. What may happen is that someone will come out of nowhere with a hot new product that turns the industry on its head again. But is the industry now too set in its ways to do handstands?
