Life is simple in a duopoly.
They say that nothing unites people more than a common enemy. If you want everyone in your company to be motivated and working toward one common goal, nothing is more powerful than a single, identifiable competitor at whom you can unleash the full force of your company’s competitive fury. The singularity of vision required for good teamwork is supplied for you – almost by magic. People do not have to believe in some abstract vision of the future – of dubious origin and questionable motives – handed down from executive management. Rather, they can identify for themselves the goals and objectives in a common-sense plan for taking down the adversary.
Xilinx and Altera have enjoyed this luxury for years. They have motivated and managed each other’s employees – simply by existing. They have pushed the state of the art in FPGAs in a tit-for-tat cross-town feud that has reverberated in the press, the supply chain, the ecosystem partner-sphere, and the customer base.
During these decades, FPGAs have surfed the swells of Moore’s Law – racing down the face of the exponential capability curve even faster than the underlying semiconductor technology itself. Neither vendor could afford to allow the other to jump on a new process node first. The resulting competitive advantage would swing the delicate balance of power and market share. Both companies pushed hard and invested heavily in internal tool development – searching for any competitive advantage that would give them a leg up on their rivals.
If you were trying to get a project going at either company during the peak of their competitive fury, often the only justification required was that THEY were rumored to be working on it already. Funding. Approved.
The breakneck pace of the two jousting horsemen of the programmable logic market served as a formidable barrier to entry for any who dared attempt a market share grab in FPGAs. Frontal attacks were dismissed forthwith by the insurmountable breadth and deep war chests of the big two. The only available strategic option for would-be interlopers was the flanking move – find a niche where the big guys were not focused and exploit their lack of attention to carve out a profitable market sub-segment.
However, while the duopoly is a self-preserving, stabilizing market force, it produces a less-explored phenomenon as well. It can cause a lack of progress in areas where neither competitor has chosen to focus. If your place-and-route software is as fast as the other guy’s – no need to spend a lot of energy speeding it up. If your margins are fantastic and the other seller is at about the same place – you don’t want to go messing up the game by cutting prices. If improving something in your product or service offering doesn’t differentiate you against your one competitor – there is really no need to spend time, energy, and money on it.
The battle has raged on for years, with the vortex of the conflict centered on a prototypical FPGA socket – the network box. Network boxes have always needed one thing – bandwidth. FPGAs provided that bandwidth – by massively parallelizing tasks like packet switching and by blasting bits at a blinding rate through ever-faster IO contrivances. The bandwidth market needed more LUTs, higher Fmax, more Gbps, more multipliers, and more memory. They needed all that with the least power possible. Try to keep the price down if you can.
Being public companies, however, the FPGA vendors are beholden to an impatient, uninformed, and unforgiving audience – their shareholders. Shareholders want to see double-digit or better growth potential, or they pack up and pull their retirement portfolios over to another party – like Facebook. If you want to maintain the image that your company is perpetually poised for exponential growth, simply maintaining a tug-of-war for incremental market share points in a relatively static sub-market isn’t going to cut it. You need to be expanding into green fields – boldly going where no programmable logic has gone before.
FPGA companies answered the challenge by branching out in multiple directions. They made low-cost devices, industrial strength devices, rad-hard devices, mid-range devices, and – most recently – systems on chip. If FPGAs are all about programmable hardware, then it only makes sense to adorn them with the other forms of programmability as well. Processor cores were rolled into the mix – both soft-core and hard-core varieties. Tool suites were enhanced to include embedded software development capabilities. The era of the FPGA-based system-on-chip was born. And born again. And born again. Fizzle.
For quite some time, nobody really took FPGA-based embedded processing seriously. After all, the processors worked at glacial speeds – an order of magnitude or more off the pace of “real” processors. Mostly, the FPGA-based processors came in one variety, compared with “real” processors whose catalog was filled with hundreds or even thousands of variants. FPGA processors used funny offbeat architectures like “Nios” and “MicroBlaze,” which, to most software developers, seemed pretty suspicious.
Recently, though, FPGA companies decided to up their game. They got tired of being the DeLoreans of the embedded processing market. They wanted to have fast, efficient, common-architecture processing subsystems on their FPGAs.
They gave a call to ARM.
Now, we face a new generation of programmable devices – with high-end embedded processing subsystems combined with powerful programmable logic fabric. For those of us who follow FPGAs, this is a whole new animal. It brings up programmable possibilities that we only began to dream of before.
At the same time, it brings up something the FPGA companies haven’t dreamed of for a while either. New competitors.
There are, it seems, already quite a number of companies making ARM-based devices with impressive lists of capabilities. We all know who they are. Take a gander at their catalogs and you’ll find an impressive array of features mated with every conceivable configuration of ARM architecture. From tiny acorns to mighty oaks, these companies crank out ARM-based chips by the bazillions, and – for most of them – it isn’t their first rodeo.
FPGA-based SoCs are counting on a single differentiating feature when they ride into battle against the Freescales, NXPs, TIs, Renesases and countless others: programmable logic fabric. Anything else the FPGA companies put on their SoCs has been done, re-done, and done better by some combination of competitors in this new arena. The FPGA companies’ puny processor catalogs will wilt like spinach in a saucepan against even the weakest of the big processor-packing titans. Their entire strategy rests on the hope that programmable logic fabric will give them a silver bullet to use against the vast experience base of the established competitors in this new, larger market.
Programmable logic is a handy super power. Instead of filling up a catalog with every conceivable combination of peripherals, you can offer a single device where unlimited peripheral configurations can be added as IP blocks. You can offload and accelerate demanding processing tasks – particularly DSP and other embedded supercomputing operations – better than with any other embedded solution. You can partition your software tasks partially into hardware with enormous power savings. You can integrate other functions off of your PCB right onto the same chip – with potentially huge benefits. It sounds like a compelling story if you need any of those things. Otherwise, well, a plain-old $5 non-FPGA SoC might serve you just as well.
If programmable fabric does prove to be a key differentiator – and proves it in a broad enough sub-segment of the market to give the former FPGA companies traction – they stand to win big. If, on the other hand, programmable fabric becomes the spinny-rims on the black SUVs of the ARM-based fleet, the exercise in FPGA market expansion may be a catastrophic failure.
In any event, this new market is most certainly NOT a duopoly. FPGA companies will have to learn to play by different rules and against new, more diverse, and sometimes more agile competitors.
Let the games begin!
4 thoughts on “Who’s the Competition Now?”
FPGA companies are entering some new markets with some ferocious new competitors. In the near future, it won’t be good enough to simply beat the guys across town. Are the FPGA folks ready for this scary new jungle?
Used to be when some new function found success on FPGAs, somebody would tape it out as an ASIC at 1/10 the price and/or 10x the speed, and that was that for the FPGA version.
Now that a cutting-edge ASIC costs $20-30M+ (counting logical and physical design, IP, verification, EDA tools, etc.) new things stay on FPGAs much longer.
This new generation of SoC FPGAs is hot, now that A-class ARM cores are powerful, well-supported and everywhere. But when a new function succeeds in the FPGA half, will mainstream SoC vendors find it easy to throw the new function into the next tapeout?
One of the key points will be the tool chain. Xilinx has already made the first move with a tool chain for Zynq that is a fraction of the cost of an SoC tool chain from an EDA major.
The problem is that the FPGA tool chains are optimized for someone who is thinking about building ASICs.
They’re not optimized for nearly automatic, fast incremental place and route – in fact, it can literally take weeks to reach timing closure for a large design.
That’s a problem that just isn’t going to go away in the near term. It would have been fixed by now if Xilinx (and other FPGA vendors) had backed off the draconian IP restrictions that prevent open source tool chain development – system-level SoC tools, all the way down to place and route, that are developer friendly in terms of speed and incremental placement.
When I suggested, some six years ago, that we needed these tool problems fixed and that open source was the most likely solution … let’s just say the Xilinx staffers, and a number of hard-core Verilog/VHDL designers, were less than friendly about using the high-level abstraction tools that are rapidly becoming commonplace today. Bit twiddling in Verilog/VHDL is soon likely to be as welcome as bit twiddling in assembly language is on large software projects.
Open source in the hardware markets is still considered a joke … I would have thought so too 20 years ago, when, as system vendors, we prided ourselves on having the best C compilers as a market edge for our platforms. Not that many years later, GCC is a significantly better tool chain than any single architecture vendor could dream of developing … much less supporting with the budgeted staff dollars available.
It really is way past time to tell FPGA vendors to back off the strict IP stranglehold that prevents open source FPGA tool chains all the way down to place and route, so we can start fixing this problem – and let the market and the developers start developing the tools needed to make the fully integrated SoC on an FPGA a reality.
There is a lot more profit in the silicon than in the tools … follow the system vendors and learn from their mistakes. Push your tool chain support budget into leveraged open source tool chains down to place and route, and share the gains with other vendors also following that lead and sharing development of a common tool chain.
Linux, GNU (gcc), Xorg, OpenOffice, KDE, and a number of other open source projects are directly staffed by paid developers on large corporate payrolls, plus a few hundred thousand unpaid private individuals.
That’s far more productive than what a few dozen developers at each FPGA vendor can do on separate, incompatible projects.