We all knew it was coming, but Tabula “officially” announced this week that they are producing their next, yet-to-be-announced family of FPGAs on Intel’s 22nm Tri-Gate process. As one of the worst-kept secrets in the programmable logic industry, the Intel-Tabula relationship hardly comes as a surprise. Word of the deal leaked widely about a year ago, when Achronix formally announced a similar relationship with Intel.
What does it all mean?
We talked with Dennis Segers, Tabula’s CEO, to find out. “We’ve been in development with Intel for some time, and we have reached the point where we felt it was time to acknowledge the relationship and what it means moving forward,” Segers explains.
Before we jump into the “what it means moving forward” part, a bit of background: Tabula has raised what is probably a record amount of venture capital for an FPGA startup – over $100 million to date. That’s an impressive feat in today’s economy, and particularly in today’s venture capital environment. However, while that’s a huge war chest compared with typical FPGA startups, it’s less than 10% of the annual revenues of either Xilinx or Altera – the two leading FPGA companies and Tabula’s only direct competition.
As we’ve also pointed out a number of times in these pages – success with an FPGA startup requires at least four key elements: an innovative architecture, a competitive semiconductor process, effective design tools, and a high-quality support network for customers. Many startups get funded based on #1 alone – the novel architecture. Unfortunately, they typically fall flat on the last three elements, and we end up buying their office furniture at a wholesale joint a couple years later when the VC money runs out. A few have made it as far as element #2 – competitive process, but failed to have the endurance or funding to even approach the last two elements.
Tabula has a decent shot at the whole package.
“We brought our first-generation product out on 40nm, which was the leading-edge technology at the time,” Segers continues. “This engagement with Intel gives us the opportunity to be out front with this technology.”
Tabula’s enigmatic and novel architecture, called “Spacetime,” is billed as “3D”… which it is not. What it is, though, is pretty cool. Tabula uses a time-multiplexed FPGA fabric to increase the effective density up to 8x compared with normal FPGA look-up table (LUT) structures. Each physical LUT stands in for up to 8 different logical LUTs, time-multiplexed with a really fast clock that loads and re-loads LUT configurations as needed. The claim that the fabric is “3D” is based on the fact that the place-and-route software pretends it is laying out the design on a 3D chip: each time slice of the FPGA is modeled as a different “layer” for layout purposes. The chip is still 2D, but the layout is done in three dimensions and then compressed into two via time-domain multiplexing. Got it? Breathe for a minute… OK. You can read our previous article on the Tabula architecture.
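The time-multiplexing idea above can be sketched in a few lines of code. This is purely an illustrative model, not Tabula’s actual implementation: the class name, the 4-input LUT size, and the truth-table encoding are all assumptions made here to show how one physical LUT can compute a different logical function in each of 8 time slices.

```python
# Illustrative sketch only -- not Tabula's real hardware or software.
# One physical 4-input LUT is time-multiplexed across 8 "folds," so the
# same silicon stands in for eight logical LUTs.

FOLDS = 8        # Tabula's claimed multiplexing factor
LUT_INPUTS = 4   # a conventional 4-input LUT

class TimeMultiplexedLUT:
    def __init__(self, configs):
        # configs: one 16-bit truth table (2**LUT_INPUTS entries) per fold
        assert len(configs) == FOLDS
        self.configs = configs

    def evaluate(self, fold, inputs):
        # The fast internal clock selects which truth table is "loaded"
        # during this fold; the hardware reloads configurations the same
        # way each time slice.
        table = self.configs[fold]
        index = sum(bit << i for i, bit in enumerate(inputs))
        return (table >> index) & 1

# Fold 0 behaves as a 4-input AND; fold 1 behaves as a 4-input OR.
AND4 = 1 << 15               # only input pattern 0b1111 yields 1
OR4 = (2 ** 16 - 1) & ~1     # every pattern except 0b0000 yields 1
lut = TimeMultiplexedLUT([AND4, OR4] + [0] * 6)

print(lut.evaluate(0, [1, 1, 1, 1]))  # AND of all ones -> 1
print(lut.evaluate(0, [1, 0, 1, 1]))  # AND with a zero -> 0
print(lut.evaluate(1, [0, 0, 1, 0]))  # OR with a one   -> 1
```

In this toy model the “8x density” claim falls out directly: one physical LUT structure services eight truth tables, at the cost of running the configuration clock eight times faster than the user clock.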
The important thing about Tabula’s architecture in this context is that it should lend itself well to the new Intel 22nm 3D “Tri-Gate” technology. Hmmm, there’s that 3D thing again. In Intel’s case, it really is 3D – at the transistor level. Instead of a typical planar transistor, Intel’s design includes a super-thin 3D silicon fin that rises up from the substrate. A gate is implemented on each of the three sides of the fin. Wait, three? Yep, one on each side and one across the top. On a normal planar transistor, there is just one gate – across the top. The result is that more current can flow when the transistor is on, and less leaks when it is off. That gives us both faster switching and lower power. Intel claims a 37 percent performance increase at low voltage compared with its 32nm planar transistors, or a power reduction of more than half at the same operating frequency. For FPGAs, where power consumption is probably the biggest hurdle to increasing capacity, this could be a huge advantage.
Segers says that the Intel technology gives Tabula a process advantage over competitive FPGAs in addition to the architectural gains it gets from the Spacetime architecture. The result should be an FPGA with higher density, faster operating frequencies, and lower power consumption – the trifecta of programmable logic goodness. Of course, we expect there to be compromises in that package, but we’ll have to wait for Tabula’s actual product announcement to find out where those compromises may be buried.
As to speculation that Intel is quietly sneaking into the FPGA space, we also chatted with Chuck Mulloy, spokesperson for Intel. According to Mulloy, Intel is taking very measured steps into the merchant fab business. They are engaging with a limited number of customers and working to be sure that their engagements are successful. He says Intel has not made a strategic decision to engage FPGA customers in particular. It is a coincidence that two of the customers who have made public announcements with Intel happen to be in the FPGA space.
Intel’s cautious entry into the merchant fab business seems prudent. While the company unquestionably has some of the most advanced semiconductor fabrication capabilities in the world, the difference between using those for your own products and making chips for other people is huge. It’s like the difference between being a world-class chef and running a successful restaurant. The chef needs to be able to produce amazing food. Running a restaurant requires a host of operations, management, customer service, and other considerations over and above culinary skills. OK, too much “Food Network”? Let’s get back to semiconductors.
In our four-point checklist for FPGA success, it seems that Tabula will have two of the checkboxes nicely filled in. In order to be successful, however, they need to check the last two – design tools and customer support. So far, Tabula has hung their hat on an innovative-but-risky cloud-based tool strategy to address both of those concerns. (We wrote about that strategy). The big questions are: How long will it take Tabula’s tool chain to mature? Will it reliably blast through customer designs with less-than-perfect HDL constructs and real-world design compromises? Will it hold up under the pressure of large, complex, high-performance designs? These are questions that are impossible to answer until a decent number of real customers have used the software for production design work. All of the lab testing in the world won’t tell you enough about how a design tool will perform in the real world of production design.
The bigger question with the cloud-based design strategy is customer trust. Will designers be comfortable using cloud-based tools where their design gets shipped across the Internet to somebody else’s server? While the rest of the universe has largely embraced cloud-based computing, engineers remain a somewhat paranoid and fickle bunch. Cloud-based security may be good enough for financial institutions to trust with tens of billions of dollars of transactions every day, but it may not be quite up to the task of protecting my really cool “one-half-hot” state machine design from my competitors’ prying eyes. Of course, my competitor has an engineer whose wife works at my company in the cube next to mine, but never mind that – let’s worry about the cloud security part.
If Tabula is successful in persuading designers to trust the cloud, they stand to make significant gains on the final checkbox – customer support. Having direct access to the customers’ design data and software from a central location could make AE support vastly faster and more efficient. A much smaller team of AEs could serve a much larger set of customers and do it faster, cheaper, and more effectively. All of that depends on the design tools working well and customers accepting the cloud model, however. So we’ll just have to wait and see.