I’ll be the first to admit that I don’t understand cricket, but I’m told that cricket matches can sometimes run for days on end, and that it’s often unclear who’s winning until, suddenly and unexpectedly, one team meets some arbitrary victory conditions and then goes home to celebrate with a few pints. The FPGA synthesis game is a lot like that. It’s a high-stakes game, with vast amounts of revenue on the line, not to mention some serious technology bragging rights, but it’s never really clear who’s ahead, who’s behind, or even what the rules were in the first place. It’s been going on that way for something like twenty years, and despite fierce competition and incredible technological advances in the tools, the scoreboard is still basically a bunch of gibberish. Nonetheless, the audience is stuck on the edge of their seats (Maybe they’ve been Super Glued there?) watching every move in this weird and wacky contest between tool nerds who have never met.
Before we can understand the FPGA synthesis scenario today, we should probably review a bit of history. In digital design, no implementation step is as critical or has as much influence over the final outcome of a design as synthesis. When we engineers write the source code for our design (in an HDL or some higher-level language), we suffer from the illusion that we are somehow in control – that if we write really clean and clever code, the resulting design will be better, faster, and more compact, and will use less energy. In reality, the code we write is merely a suggestion for our synthesis tool. The real outcome of our efforts depends on how well that tool interprets our code, divines our intent, and infers some circuitry that will honor our requests while adhering to a set of design rules we may not even know, in order to do the best job it can of achieving a set of optimization goals that we probably wouldn’t understand.
Two decades ago, when designs got so large that schematic-based entry of digital designs became unwieldy, the world switched to language-based design. But, in order for language-based design to be real, we needed logic synthesis. Synopsys, the world’s largest EDA company today, was arguably built by logic synthesis. The company’s explosive growth and impressive success record began in the 1990s with Design Compiler, a synthesis tool that became the de facto industry standard for years. Perhaps the name “Design Compiler” belied the complexity and sophistication of the algorithms involved in synthesis. By choosing the word “compiler,” Synopsys sent the message that what was going on here was at a similar level to the software compilers that turned our C code into machine instructions. Nothing could be further from the truth, and as the years passed and the technology advanced, the “compiler” metaphor only got worse. Today, virtually no logic design is done by humans. We could safely argue that 99.99% of the logic circuits in the world were created by algorithms, or – if we’re generous – only indirectly created by the handful of humans whose heuristics guide the algorithms as they interpret our code and infer massive blocks of circuitry that we hope will do what we want.
The business success Synopsys achieved with Design Compiler was the envy of the EDA industry. Every EDA executive and entrepreneur on Earth began a desperate quest to recreate that magic – to find that one franchise-maker tool that could propel their five-person project team to Fortune 500 status on the coattails of one mission-critical solution for the entire electronics industry. Design Compiler’s success was in ASIC design, so when FPGAs arrived on the scene and became the heirs apparent to the custom chip crown, the answer seemed obvious. Whoever could create the best FPGA synthesis tool would win. Players large and small threw their hats into the ring. Synopsys looked like a favorite as they created the FPGA equivalent of the wildly successful Design Compiler. Mentor Graphics set out to save their struggling synthesis program by creating an FPGA version of their AutoLogic tool, and a whole wave of startups appeared on the scene, all aiming to claim the prize, for one lesson the world had learned from Design Compiler was, “There is no second place.” Once a critical tool achieves a winning position, the rest is like the end of a game of Monopoly: the rich get steadily richer, and the poor gradually fail, until finally only one player is left standing.
Unfortunately, in studying the road map of Design Compiler’s voyage to success, there were a few key landmarks that the industry experts missed. In the ASIC market, third-party synthesis (tools supplied by an independent EDA company rather than by the chip makers themselves) succeeded largely because of economy of scale. The EDA company could invest the resources to create a world-class synthesis tool, and those engineering costs could be easily amortized over a large constellation of ASIC vendors and technologies. No single ASIC vendor had the resources to create their own tool that could match the sophistication of those created by the EDA experts. And, there was a knock-on benefit: by standardizing the industry behind one synthesis tool, your design could become largely technology- and vendor-independent. If you needed to switch to a different ASIC supplier, or wanted to target your design so that multiple suppliers could build it, the synthesis tool provided the firewall that kept any chip vendor from locking you in. Furthermore (and this will become important later), it had already been established that place-and-route, the next step in ASIC design after synthesis, was in the domain of third-party EDA suppliers rather than the ASIC vendors themselves.
In the nascent FPGA ecosystem, some of those things were different in subtle but staggeringly important ways. First, there was no economy of scale to match the ASIC tool market. Where there had been dozens of competitive ASIC vendors, in FPGA there really were only two: Xilinx and Altera. And where many ASIC companies had been happy that the vendor-independent design facilitated by third-party tools gave them an opportunity to steal business from their competitors, the ferociously feuding FPGA duopoly wanted nothing to do with vendor-independent design. They grabbed any and every opportunity to lock their customers into their products and technology. Finally, because the EDA industry had largely ignored FPGA technology when the original place-and-route tools were being developed, the two major FPGA companies already supplied their own place-and-route tools, effectively locking the EDA industry out of the back end of the design flow.
As often happens, the bright-eyed startups crushed the big established tool companies in the innovation race to capture the emerging FPGA synthesis market. Three notable startups – Synplicity and Exemplar Logic in the Bay Area, and ISI from Grenoble, France – produced FPGA synthesis tools that handily beat those from the big EDA companies as well as the first efforts at tools from the FPGA vendors themselves. Of those, Synplicity quickly established market leadership. Exemplar Logic and ISI were acquired by Mentor Graphics and Xilinx, respectively, to bolster their floundering FPGA synthesis efforts. Soon, as the pundits had predicted, Synplicity bubbled to the top and essentially took over the game. They became not only the dominant third-party synthesis supplier, but also the supplier to the FPGA vendors themselves, as the FPGA companies offered detuned OEM versions of Synplicity’s tools as part of their standard design tool packages.
Unlike the Design Compiler story, however, the Synplicity situation was neither stable nor sustainable, for several reasons. The two major FPGA companies were not content to allow a third party to control such a critical piece of differentiating technology. If you had a better synthesis tool, your chips performed better, and the opportunity to gain or lose competitive advantage because of synthesis was enormous. From one year to the next, if Synplicity happened to do a better job supporting either Xilinx’s or Altera’s new technology, it could have catastrophic effects on the other vendor. So both vendors set about developing their own synthesis technologies with the long-range goal of unseating Synplicity.
The FPGA vendors had two big advantages. First, they could give their tools away essentially for free in order to make money on the silicon sale. Synplicity had to make a business out of tools alone, so they were forced to sell synthesis tools with five-digit price tags against zero-cost alternatives from the FPGA vendors. Second, because the FPGA vendors controlled the place-and-route technology, they were in a better position to do the physical optimizations that would become more and more important as FPGA technology advanced and routing delay took over a larger and larger share of the performance picture.
Nonetheless, Synplicity stayed on top for years with the simple strategy of outrunning the competition. In customer benchmarks, the company could consistently show that their tools outperformed the “free” vendor tools by a significant margin. The marketing message was simple — if you cared about the quality of your design, you bought Synplicity. The FPGA vendors were patient and determined, however. Supported by multi-billion-dollar chip revenue streams, Xilinx and Altera staffed their tool organizations into the hundreds. Synplicity, on the other hand, had gone public and was now accountable to shareholders for posting quarterly profits and showing aggressive growth, an environment that does not lend itself well to a long-term engineering investment in a single product. In order to be seen as a viable public corporation, Synplicity needed to expand and diversify their product portfolio, and their engineering talent was stretched thin and defocused as a result. The cash flow from Synplicity’s synthesis products was pumped into other emerging product lines, and while that was happening, the FPGA vendors’ synthesis technology inched ever closer to parity.
Finally, Synplicity was acquired by Synopsys, probably primarily because of the advanced HAPS prototyping technology Synplicity had recently acquired from Hardi. Folded into the big purple mass of Synopsys, the Synplicity identity slowly melted into obscurity.
For the next few years, the FPGA companies continued to gain ground. More and more customers saw the FPGA vendor-supplied tools as “good enough,” and smaller and smaller percentages relied on third-party EDA solutions. During this time, a new opportunity emerged and died almost as quickly. Just as schematic-based design had before it, RTL-based design began to run out of gas in its ability to support the complexity of the largest new designs. One of the key technologies for overcoming and managing this complexity is high-level synthesis (HLS), and the combination of the productivity of HLS with the fast, low-risk implementation of FPGA was truly serendipitous. Once again, there was an opportunity for the EDA industry to own a key, lucrative component of the FPGA design flow. But even though EDA companies had an incredible lead in HLS technology, they had clearly had enough of the FPGA scene. EDA gave only a passing shrug to the enormous potential of HLS with FPGAs, and, as EDA casually watched, the FPGA companies quickly acquired and built high-level design technologies of their own, assimilating them into their already rapidly improving tool suites.
Today, Xilinx and Altera both offer newly upgraded implementation flows that include their own HLS, synthesis, and place-and-route technologies integrated into sophisticated and productive design flows. At the same time, Synopsys continues to develop, market, and sell the former Synplicity tools to a segment of the market that appreciates their vendor-independence, special capabilities in areas like high-reliability design, and sometimes-better quality of results and performance.
It’s virtually impossible to run any test that would say with certainty which of these solutions is superior. The companies themselves all run benchmark suites consisting of hundreds of customer designs, and they usually measure their development success in single-percentage-point improvements on some subset of that test suite. Just about every new optimization that improves the results on twenty designs hurts the results on five others. It’s truly a game of inches, and no one even understands how to build a reliable ruler.
Meanwhile, we all sit here in the bleachers glued to our seats, watching the claims and counterclaims of “20% better this” and “4x more of that,” with no real way to tell if our favorite team is winning or losing. We just hope that when we get home and fire up our latest design, we aren’t left with too many timing violations.