
After Intel and Altera

What Happens to FPGA?

For decades, the FPGA market has been a well-balanced duopoly. Something like 80% of sales have been split by two ferocious competitors, Xilinx and Altera, constantly jousting for single points of relative market share. This dynamic has driven everything from the FPGA technology itself to the tools, IP, and services that make the whole concept work. It has determined what we pay for FPGAs, what they can do, and how we use them. 

Now, Intel plans to buy Altera, and the duopoly that has dominated the FPGA universe will come to an end. What happens next? Will the Earth shift on its axis? Will the “FPGA market” cease to exist? What will be the long-term implications of this business change on the future direction of this critically important technology?

It is easy to find pundits with strong opinions on opposite sides of this question. Some say that Altera will fade into obscurity, leaving Xilinx to completely rule the roost. Others say that the Intel/Altera combo will be an unstoppable megaforce that will obliterate Xilinx. Then there are the middle-of-the-roaders who think that nothing much will change, Altera chips will get their silkscreening revamped, and it will be business as usual in FPGA Land.

What are the factors to consider in preparing to polish our programmable logic crystal ball? 

Here are six things to think about. Wait – that is starting to sound like one of those click-bait headlines “Six things you didn’t know about Intel and Altera!” or “Intel thought they were doing a normal acquisition. What happens next will amaze you!” Yeah, No.

First, and most important, an FPGA company is not a “semiconductor” company. An FPGA company is a software and service company whose business model involves selling silicon. Take a look at the engineering teams at Xilinx and Altera. You’ll probably find that there are more engineers working there on tools and IP than on chip design. Both companies have huge armies of technical marketing engineers (TMEs) and field applications engineers (FAEs) to help handle the daunting task of making customers successful at integrating FPGAs into their designs. The integrated tool suites distributed by both companies are some of the largest, most complex, and most comprehensive electronic design automation (EDA) systems in existence. Those tool suites are developed and supported by engineering teams that are probably similar in size to those at the biggest EDA companies.

Putting some LUTs and transceivers on a chip is not trivial, but it is only a very small part of what it takes to make an FPGA company (and its customers) succeed.

Looking at the impact of the Intel/Altera deal, then, one of the first things we hear is “Margin stacking!” Since Xilinx has to pay TSMC for chips, and then add their own margin to the price, there are two companies’ worth of margins in Xilinx’s prices. But if Altera and Intel become one company, they don’t have two companies’ worth of margins to deal with. They could drop prices!

This margin stacking argument doesn’t seem to hold much water on closer examination, though. Margins are sacred in semiconductors, and nobody is likely to go willy-nilly cutting margins on one of the most lucrative technologies in semiconductors – especially not margin-savvy Intel. And, if you view the “fab” part of Intel and the new “Altera” part of Intel as two different organizations that both have to earn their keep (which they are), you really are still looking at the same situation on margins that Xilinx and TSMC face. Putting both operations in the hands of the same group of shareholders doesn’t really change what each piece needs to do to make a profit.

The second thing we hear is “size advantage.” There is some vague notion that Intel’s massive army of engineers and enormous resources will come crashing down and squash Xilinx like a bug. This, too, is a bit of a silly view if you look at it logically. Intel is not going to suddenly re-deploy engineering troops to go work on FPGA-related stuff. Their engineers have jobs already. And most of them are not FPGA experts anyway. Engineers are not like Legos, where you can unplug some from here and stick them in over there. And throwing a bunch of extra resources into a team that’s already working efficiently is one of the surest ways to cause confusion and slow things down.

The third thing buzzing around the internet is “fog of war.” The idea here is that Altera will be temporarily traumatized and distracted by the acquisition process itself, will lose focus, and will fall hopelessly behind Xilinx in the always-demanding race in which the two companies are engaged. There is even speculation that Altera would suffer attrition of key people in the transition. While this is always a concern in any acquisition, Altera seems unfazed so far. The company seems to be proceeding with its normal activities and exhibiting the usual amount of enthusiasm. Also, these deals tend to have mechanisms in place to persuade key talent to remain with the ship. There is no reason to believe that this one will be any different. 

The fourth observation we keep hearing is that Intel has a terrible record in acquisitions. The critics cite a long list of companies that were acquired by Intel and whose technologies seem to have simply disappeared from the map. While this may be true to some degree, Intel has done a few deals where they mostly left the company alone and let it operate almost as it did before. Wind River would be a good example. It is likely that Intel will give a good deal of thought to this one, since it is their largest and most expensive buy ever.

Number five, if we are right about Intel’s motivations in purchasing Altera, is that Intel will re-focus Altera on things critical to Intel, such as protecting the company’s dominant position in the data center processing business. If Intel does this, it could easily leave Altera’s other existing customers and markets neglected, feeding them right into the welcoming hands of Xilinx. This one is a possibility, but unlikely. Altera should be able to easily do what Intel needs in the data center without abandoning or neglecting their other businesses, and Intel wouldn’t be wise to alienate Altera’s current customer base by pulling the rug out from under them.

A sixth thing to think about is ARM. Many are speculating that Intel will pull the plug on Altera’s integration of ARM processors and insist on x86 architectures being used instead. Again, this would be a mistake. Intel will earn their margins on silicon with ARM architectures just like they do with x86. And, again, Intel would be doing serious damage to Altera’s existing customers if they suddenly forced a processor architecture change. All the software that has been developed for the existing Altera platforms would have to be re-done or moved over to a similar Xilinx device.

Of course, for the aforementioned data center applications, it would make sense to have a device with Altera FPGA fabric combined strategically with Intel x86 processors. But that device would have many additional design changes compared with today’s FPGAs. We believe a heterogeneous integrated processing platform that combines Altera FPGA fabric with Intel x86 processors could be a serious game-changer in the high-end processing world (data centers, high-performance computing, etc.). Realizing that vision, however, will also require a significant leap in design tool and compiler technology. Tools need to be perfected that can make an entire FPGA design flow behave like a software compiler: legacy high-level code goes in, and an optimized FPGA/CPU architecture and binaries come out.
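To make that "software compiler" vision a little more concrete, here is a toy sketch (in Python) of one decision such a flow would have to automate: profile each hot kernel of a legacy program, estimate whether FPGA fabric or the x86 cores win after offload overhead, and emit a mapping. All function names and cycle counts here are invented for illustration; a real flow would derive them from profiling and synthesis estimates.

```python
# Illustrative only: a toy hardware/software partitioner of the kind a
# future FPGA/CPU compiler flow would need. All numbers are invented.

def estimate_speedup(cycles_cpu, cycles_fpga, offload_overhead):
    """Net speedup from moving a kernel from the CPU to FPGA fabric."""
    return cycles_cpu / (cycles_fpga + offload_overhead)

def partition(kernels, threshold=1.5):
    """Assign each profiled kernel to 'fpga' or 'cpu'.

    kernels: dict of name -> (cpu_cycles, fpga_cycles, overhead_cycles).
    Offload only when the estimated win beats the threshold.
    """
    plan = {}
    for name, (cpu, fpga, overhead) in kernels.items():
        plan[name] = "fpga" if estimate_speedup(cpu, fpga, overhead) >= threshold else "cpu"
    return plan

# Hypothetical profile of a legacy program's hot loops.
profile = {
    "fft_inner":   (1_000_000, 50_000, 20_000),  # highly parallel: offload
    "parse_input": (200_000, 180_000, 50_000),   # serial-ish: keep on x86
}
print(partition(profile))  # {'fft_inner': 'fpga', 'parse_input': 'cpu'}
```

The hard part, of course, is not this bookkeeping but producing the cycle estimates and the optimized FPGA implementation automatically, which is exactly where the compiler-technology leap is needed.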

While there are a plethora of other things to consider in predicting the outcome of this major change in the FPGA market, these six should be enough to get you started. What do you think will happen?

5 thoughts on “After Intel and Altera”

  1. Intel does a substantial amount of networking stuff too. NICs, switch silicon and what have you. If they were to start putting FPGA fabric there, it would be a game-changer for SDN.

  2. There isn’t a “one size fits all” solution, and this is where it’s going to get really fun.

    Over the last decade, math-acceleration FPGAs ended up with a large number of DSP IP blocks optimized for certain FFT and matrix operations.

    Network- and peripheral-centric FPGAs ended up with a large number of SERDES IP blocks optimized for communications operations.

    The glue logic, routing, and generic operations were left to be implemented as bit functions in the FPGA bit level fabric.

    The net result was an array of products optimized at both extremes, and some with a light blend of both in the middle, targeting more traditional applications needing some math and some communications.

    It’s relatively easy to write “architecture aware” HLL C code compiled to hard FPGA logic that greatly accelerates highly parallel bit banging operations and data mashing/math operations.

    And it is only slightly tougher to implement high-speed state machines with direct peripheral I/Os that are just out of reach of bit banging on a fast low-end SoC.

    For the FPGA as an on board co-processor speeding up general purpose algorithms a new round of application specific IP blocks and bit level FPGA fabric are necessary. Like before, there will be some applications which will benefit from high numbers of SERDES or DSP IP blocks.

    High-core-count CPUs with large L1 and L2 caches per core beat down the parallel side of Amdahl’s law just fine.

    Beating the serial bottleneck side of Amdahl’s law for many traditional performance critical applications is going to require several things.

    The first is LOTS of small, single-cycle, multi-ported, highly distributed 8/16/32-bit memories to bypass the memory bottleneck of traditional general-purpose processors, which are naturally serial-bottlenecked at memory even with good caches.

    The second is a logic/data/routing fabric that is naturally 8/16/32 bits wide. While this can be built using current FPGA logic/routing fabrics, they are often inefficient at implementing data paths more than a few bits wide, and quickly exhaust critical resources if the design is more than 50% dense.

    The third is CLBs that are more like stackable 8-bit ALUs, easily implementing 16/32-bit two’s-complement operations, with multiple bus-wide routing resources on both sides.

    The fourth is bus-wide MUXes on both sides of each CLB to handle the more complex bus-wide data selection, routing, and rotation operations that currently dominate CLB use when compiling normal C algorithms to FPGA logic.

    The FPGA coprocessor starts to look more like a sea of small, single cycle, pipelined processors embedded in highly distributed memories, in order to effectively beat Amdahl’s law on the serial bottleneck side.

  3. Intel is a poor player in the architecture field. It certainly produced one of the most successful CPU architectures in the industry (besides ARM). However, it has always relied on its circuit design and semiconductor strength to improve performance. A simple example is that multicore architectures are plagued with an efficiency disease where more than half of the chip is cache memory. This shows that Intel was unable to innovate in that segment. Wasn’t it obvious that adding more memory-bandwidth-hungry cores on a single chip would only exacerbate the CPU-memory gap?

    An FPGA is a very advanced piece of architecture whose potential usage is far more diverse and unpredictable than a processor’s. So my opinion is that the Intel culture will clash with Altera’s sense of architecture. I do not know whether they will be able to overcome it, but it will certainly require a major culture change on Intel’s side.

  4. Intel will use its sales force to get FPGAs into places that Altera never sold into before. While Intel loves margins, it can take lower margins with strategic accounts that Xilinx won’t be able to match. Also, Intel makes boards. Altera and Xilinx only make prototype boards and don’t do that very well. So Intel can take FPGAs from sand to server systems. This will be very powerful, and Xilinx had better be ready for it. Intel also has thousands of PhDs who might like to chime in on features and uses for FPGAs that have gone unexplored because Intel was not in the business. Intel also has lots of compiler people who can be used as a resource. Intel has many libraries that can be ported to FPGAs. Intel knows a lot about the data center that Altera and Xilinx don’t, and can bring that to the party.

  5. What will happen next is that Xilinx will get bought too. That seems to be the pattern in the semiconductor industry. If Broadcom and Altera can get bought, so can Xilinx. The semiconductor industry is consolidating rapidly.
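The serial-bottleneck argument in the second comment above is easy to make concrete with Amdahl's law: if a fraction p of a workload is parallelizable and that fraction is sped up N-fold (by cores or FPGA fabric), the overall speedup is 1 / ((1 - p) + p/N), so the serial fraction caps the win no matter how wide the hardware gets. A minimal sketch, with illustrative numbers only:

```python
# Amdahl's law: the serial fraction of a workload caps overall speedup,
# no matter how much parallel hardware (cores, FPGA fabric) is applied.

def amdahl_speedup(parallel_fraction, n):
    """Overall speedup when `parallel_fraction` of the work is sped up n-fold."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

# A workload that is 95% parallel: even unlimited hardware cannot beat 20x,
# because the 5% serial remainder dominates.
print(round(amdahl_speedup(0.95, 100), 1))        # ~16.8
print(round(amdahl_speedup(0.95, 1_000_000), 1))  # ~20.0
```

This is why the comment argues for distributed memories and bus-wide fabric: those features attack the serial 5%, which is where the remaining headroom lives.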

