
One for All

Does a Standards-Based IP Strategy Hurt Competition?

In today’s global electronics economy and, more important, global techonomy, successful companies have to make careful decisions about where they will differentiate and where they will standardize.  Nowhere is this more evident and crucial than in the fabless semiconductor arena.  Since fabless companies have (as the name implies) no fab, they logically do not have the ability to differentiate themselves from their competition by superior semiconductor technology. 

Of course, you wouldn’t know that if you listened to the FPGA companies. 

We constantly hear from FPGA vendors that they – and only they – have chosen the perfect offering from the menu of merchant semiconductor processes.  Only they have correctly decided which of the three or four obvious options from one of the three or four obvious fabs is the one that will lead you, your team, and your design project to programmable paradise.  When they are first to get a chip delivered on a new process, they let you know.  When they choose a different process variant from their competitor, they explain why their choice was better.  For a half-hour or so, it works.  They have the advantage.  Then, the other guys do something new.

It’s easy to make fun of the FPGA companies.  They practically beg for it.  However, there are good reasons for their sometimes-confusing market maneuvers.  They need a place to hang their hat.  Since we all know that everybody has access to the same semiconductor fabs, and they can’t therefore really differentiate themselves based on semiconductor processes, what then can they do to build a better bit-trap?  FPGA fabric is basically all the same.  The major patents have expired.  A LUT is a LUT is a LUT.  Of course, the various vendors will also try to convince you that their LUT is bigger and more efficient than the competitors’ LUT because, you see, this carry chain has an extra output that, in certain situations, allows two of their LUTs to do the work of three of the competitors’ LUTs in 75.6% of the designs tested except those that behave azZzzzzz…..  zzzzzz… 

Oh, sorry – did you fall asleep there?  Yeah, we did too.  The truth is, FPGA vendors can’t differentiate themselves with their FPGA fabric either.  Xilinx and Altera, despite the protests we’ll hear (the comment box is located at the bottom of this page, guys), have basically the same LUT fabric.  Sure, there are subtle differences that might make your particular design work a tad more efficiently in one or the other, but I challenge anyone who doesn’t work for either company to give an example (again, see the comment box below) where the difference in fabric architecture makes a substantial difference in a real-world design.  Look at Lattice, and the story isn’t any different.  Achronix has a cool trick with asynchronous clocking (sounds like an oxymoron, doesn’t it?).  Tabula time-multiplexes their LUTs.  SiliconBlue uses a different kind of memory cell, which allows them to make tiny, low-power chips very, very cheaply.  Actel, oops, I mean Microsemi, uses flash to make their configuration persistent.  All of these novel approaches offer some differentiation, but not enough to let any of these challengers take a significant bite out of the market share of the two dominant players.
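To see why “a LUT is a LUT,” consider what a lookup table actually does: a k-input LUT stores a 2^k-entry truth table, so the very same cell can be configured to implement any Boolean function of its inputs.  A minimal sketch in Python (purely illustrative — a real LUT is a column of SRAM configuration bits feeding a multiplexer, not software):

```python
# A k-input LUT is just a 2^k-entry truth table: configure the table
# and the same "hardware" computes ANY Boolean function of k inputs.

def make_lut(func, k=4):
    """Precompute the truth table for an arbitrary k-input Boolean function."""
    table = [func(*((i >> b) & 1 for b in range(k))) for i in range(2 ** k)]

    def lut(*inputs):
        # The inputs simply select one row of the stored table.
        index = sum(bit << pos for pos, bit in enumerate(inputs))
        return table[index]

    return lut

# The same structure, configured two different ways:
and4 = make_lut(lambda a, b, c, d: int(a and b and c and d))
parity = make_lut(lambda a, b, c, d: a ^ b ^ c ^ d)
```

Because every vendor’s LUT reduces to this same table-plus-mux structure, the differentiation arguments come down to table size, carry logic, and packing details — not fundamentals.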

So, if you’re keeping track, that means that nothing in the hardware design of the chips offers a significant differentiator (one that would make the market shift dramatically in their favor) for an FPGA manufacturer.  Similarly, software isn’t the place to put the nail in the competitors’ coffins, although it is a significant competitive barrier that the two big players have long used to fend off ferocious startups.  Xilinx and Altera both spend a huge fraction of their engineering budget developing, enhancing, and maintaining their design tool suites.  Smaller competitors tend to rely more on third-party EDA companies for most of the important bits – simulation, synthesis, and so forth.  Place-and-route, it turns out, is part of the cost of entry into the FPGA business, as there are no viable third-party offerings that can easily be adapted for place-and-route of a new FPGA architecture. 

Historically, tools actually are one of the key elements in generating brand loyalty for FPGA vendors.  In numerous surveys we’ve done of our audience over the past decade, “previous success/experience with tools” is the #1 reason design teams choose a particular vendor’s FPGAs.  Once an engineering team has cleared the learning hurdle with one vendor’s tools and tasted success, they’re reluctant to tackle the risk of changing vendors – even for a more compelling piece of silicon. 

So, if all FPGA companies are basically the same in process, fabric, and tools – the answer to differentiation must be in IP?  This week, meeting with Xilinx, we were treated to a comprehensive explanation of their IP strategy.  It turns out that the world’s leading FPGA company has decided to go with a completely open, standards-based IP strategy.  Wait, WHAT?  You were supposed to be spouting forth superfluous platitudes about ecosystems and proprietary advantages of optimized blocks of industry-leading hierarchically-organized functionally-superior clock-accurate synthesis-compatible layout-proven vendor-certified intellectual property cores, and you give us industry standards?  


For designers, this is a big ol’ basket full of good news.  It means that you’ll probably be able to pick the very best IP the industry has to offer.  Evaluate it in just about any tool on the market.  License it however your company likes to license IP cores.  Use it in your design (regardless of whether you’re heading for a custom SoC, an FPGA prototype, a production design using FPGAs, or a hobby project in your daughter’s garage robotics lab) and have a very good chance of success.  Xilinx isn’t doing any big-league world-class industry-leading engineering for all this.  They’re just putting into place a solid, common-sense, no-wheel-re-inventing, straightforward IP strategy.  Just like we wouldn’t expect an FPGA vendor to do. 

Xilinx’s standards-based IP strategy starts with plug-and-play interface standards like AMBA/AXI/AXI4, and Accellera/UVM.  For packaging, productization, metadata, and security, they’re using IEEE P1735 and IP-XACT.  For planning and cataloging, ChipEstimate and Design & Reuse.  And the whole thing is plumbed into the ISE design suite for inclusion in your next FPGA design.  The thing that’s the most innovative about Xilinx’s IP strategy is the fact that they didn’t try to be innovative.  They just relied on industry-proven sources, standards, and steps to make a wealth of high-quality IP available and accessible to their customers.  
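For readers who haven’t run into IP-XACT (IEEE 1685): it’s an XML schema for describing an IP core’s metadata – vendor, name, version, bus interfaces, file sets – so that any compliant tool can catalog the core and wire it into a design automatically.  A hand-written illustrative fragment (the component names and version below are invented for this sketch, not taken from any actual Xilinx core):

```xml
<!-- Illustrative IP-XACT component header; the identifiers are hypothetical -->
<ipxact:component
    xmlns:ipxact="http://www.accellera.org/XMLSchema/IPXACT/1685-2014">
  <ipxact:vendor>example.com</ipxact:vendor>
  <ipxact:library>peripherals</ipxact:library>
  <ipxact:name>uart_lite</ipxact:name>
  <ipxact:version>1.0</ipxact:version>
  <ipxact:busInterfaces>
    <ipxact:busInterface>
      <!-- Declaring an AXI slave port here is what lets an integration
           tool connect this core to an AXI interconnect automatically -->
      <ipxact:name>S_AXI</ipxact:name>
    </ipxact:busInterface>
  </ipxact:busInterfaces>
</ipxact:component>
```

The point of the standard is exactly the one the article makes: once every core carries this kind of machine-readable description, cataloging and integration stop being vendor-proprietary problems.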

Xilinx does plan to spend some cycles on IP verification.  The Xilinx Verification Initiative (XVI) will take advantage of UVM and OVM to provide a common standard for silicon and IP development and verification.  By providing industry-standard-based conventions for IP interfaces, verification and certification services, protection and security, and distribution, Xilinx is approaching an infrastructure that would enable the “app store” model for IP.  The one thing that’s missing is a common licensing standard.

Does this IP strategy give Xilinx a competitive advantage?  Actually, no.  Any FPGA vendor could adopt the very same standards in the very same ways and level the playing field on IP.  What this does, however, is basically assure that no competitor will be able to use IP as a competitive advantage against Xilinx.  With all the world’s IP available via industry-standard mechanisms, Xilinx’s IP will act like the peloton at the Tour de France – you may be able to break away for a short period of time, but eventually the masses will bear down on you and absorb you back into the pack again.  You might as well save your energy. 

If IP is not the “secret sauce,” then how will FPGA vendors defend their perimeter?  The answer is simple and obvious.  Money.  While no single part of the FPGA business – silicon, architecture, tools, IP, or service – acts as a solid barrier to competitive entry, the cost of succeeding in all of those areas in parallel is an enormous wall.  FPGA startup Tabula, for example, has garnered over $100M in funding – which is a monumental investment for a semiconductor startup.  However, when compared with the war-chest of companies like Xilinx and Altera, it barely gives them a breath.  With the cost of designing a single new FPGA on a single IC process estimated in the range of $45 million, and with the big two FPGA companies both pumping through over a billion a year in cash, $100M seems a paltry sum.  Tabula will have to be clever, efficient, and lucky in order to succeed.  

Until somebody comes up with a new category of device that obsoletes FPGAs altogether (which hasn’t appeared on the horizon just yet), we’ll have an exciting battle – where competitors decide what areas are ripe for competition and differentiation – and what areas are best neutralized and standardized.  It will be fun to watch.
