In 1999, DAC (the Design Automation Conference) was in New Orleans. The industry was at the height of its growth, and, when you got off the plane, it looked as though at least a third of the cabs had illuminated Synopsys advertisements on their roofs. There were almost 250 exhibitors, many of them recent start-ups, and it took forever to get from the show booths to the demo booths. In the evening, DAC vendor parties were everywhere, and, despite the humidity and heat, it was a wonderful time to hear about new ideas.
A week or so ago I looked at the Electronic Design Automation Consortium (EDAC) membership page. There were fewer than fifty members. What is happening? Gary Smith says that EDA sales are growing, with nearly 9% growth in 2010, and he predicts an 8.2% Compound Annual Growth Rate (CAGR) for the next five years. But there seem to be far fewer start-ups. Where are they? The best person to ask seemed to be Steve Pollock. Steve has been in the EDA industry since before it was an industry and was for many years the chairman of the EDAC Emerging Companies Committee – that is, the committee for companies whose annual turnover hasn’t yet reached $5 million. His day job is as VP of Marketing and Business Development for S2C, Inc. I have also chatted with a number of other people in the industry and watched a video of a DAC 2011 panel.
I am not going to quote Steve verbatim, and he may not always agree with everything I am going to say, but his thoughts certainly have influenced mine.
What the industry is seriously missing today is new start-ups. One of the joys of the EDA business used to be bumping into someone who had been fairly senior in one of the EDA giants and was now driving his own start-up (they were nearly always men), usually with a point tool that provided something the mainstream tool flow lacked. These guys were always enthusiastic, hardworking, and devilishly convincing when arguing that the industry needed their product. They usually managed to attract funding, partly because they didn’t need a huge amount of cash and partly because there was a precedent of start-ups that had delivered significant returns on the initial investment – ten times the investment was not unusual. VCs might not have had a detailed understanding of the technology, but they did know a good deal when they saw it. These returns normally came not through a public flotation but through a buy-out by one of the larger companies, with Cadence a particularly active buyer. Usually the senior staff were given some form of Golden Handcuffs, which linked their rewards to the performance of their old company and locked them into the larger structure for a fixed period. In many cases they would serve the required two or three years and then start another business.
From many points of view, this was a healthy system. Mainstream players could cherry-pick the successful products without the risk and cost of in-house development, and the people prepared to take a risk were able to get a healthy reward. But today there are very few of these start-ups. Why?
There is no simple answer, but a number of different things have changed. One is that the VCs are no longer that interested. After the dotcom bubble burst in 2000, many investors wrote off all technology companies as poor risks. Now that we appear to be living through a possible second dotcom bubble, the VCs seem unwilling to bother with the relatively small amounts needed for EDA, compared with the huge sums they are investing in the hope of finding the next Facebook or whatever. A second factor is that the exit route of take-over is not as easily available: acquiring companies are striking much tighter bargains when they do buy, and the investors are lucky to see a return of three or four times their investment. While this might seem a good return, the one start-up that pays off has to cover the losses on the other three or four that fail. Do the sums: if one bet in five returns even four times its stake and the rest are written off, the portfolio as a whole gets back only four-fifths of its money.
It is also clear that there are fewer and fewer niches for point tools. The tool flow may not be perfect, but there are few gaps left, and, with the increasing complexity of the process, the remaining gaps sit higher up the abstraction ladder and present greater technical challenges.
The Cadence EDA 360 manifesto (which we have looked at on more than one occasion) draws attention to software and its importance in future ASICs and, particularly, in future SoCs. At the same time, Gary Smith’s analysis shows the most vigorous growth coming from what he classifies as ESL. Electronic System Level has been bandied around for many years as the way forward for designing and verifying complex systems at a fairly high level of abstraction, even if there is not complete agreement on what the term means. In EDA 360 terms, we are looking at aspects of System Realisation. And this is one area where we can see independent companies operating, even if few of them fall into the category of start-up.
ESL in this context includes many things, such as the use of high-level languages, high-level synthesis, virtual models of hardware and software, physical prototyping, and software/hardware co-development. OK – very loose use of ESL perhaps, but this seems to be close to Gary Smith’s view. And Gary sees this area as growing at close to 28% CAGR for the next five years. The importance of the area is shown by a number of recent announcements: one that caused a certain amount of comment was Calypto acquiring the Catapult C synthesis tool from Mentor Graphics, to add to the SLEC System-HLS verification tool.
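For readers who haven’t met high-level synthesis: tools in the Catapult class take untimed C or C++ and generate RTL from it. The fragment below is only a sketch of the style of input involved – coding rules and pragmas vary from vendor to vendor, so treat the details as illustrative assumptions rather than any particular tool’s requirements.

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative only: the sort of untimed C++ that high-level synthesis
// tools turn into RTL. Real flows add vendor-specific pragmas for
// pipelining, loop unrolling, and interface synthesis.
constexpr int TAPS = 8;

int32_t fir(int16_t sample, const int16_t (&coeff)[TAPS]) {
    static int16_t shift_reg[TAPS] = {0};  // maps to a register chain
    for (int i = TAPS - 1; i > 0; --i)     // the tool typically unrolls this
        shift_reg[i] = shift_reg[i - 1];
    shift_reg[0] = sample;

    int32_t acc = 0;
    for (int i = 0; i < TAPS; ++i)         // multiply-accumulate: maps to DSP blocks
        acc += int32_t(shift_reg[i]) * coeff[i];
    return acc;
}

int main() {
    const int16_t coeff[TAPS] = {1, 2, 3, 4, 4, 3, 2, 1};
    for (int16_t s = 0; s < 4; ++s)
        std::printf("y=%d\n", fir(s, coeff));
    return 0;
}
```

The point is that the designer describes behaviour (a filter, here) and leaves the clock-by-clock scheduling to the tool – which is exactly why verifying that the generated RTL still matches the source, as SLEC does, matters.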
An area that is receiving a lot of attention is the way in which prototypes, both real and virtual, can be used to debug the hardware and to allow hardware/software co-design – or, at the very least, software development to start before the hardware design is complete and in silicon.
FPGA platforms for prototyping ASICs/SoCs have been used for some years now – the very first were the size of a family-sized refrigerator, used a ton of power, and required very complex partitioning of designs (dividing the ASIC into bite-sized chunks, each small enough to fit into an FPGA). Because of both the speed of the FPGAs and the need for a lot of traffic between chips, these systems were at least an order of magnitude slower (and sometimes a lot worse) than the final silicon. They were also expensive. As FPGAs have grown bigger, faster, and, in proportion, much less power-hungry, things have improved.
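To see where that order of magnitude went, consider a back-of-the-envelope model (every number below is my own illustrative assumption, not a vendor figure): the nets cut by a partition usually have to be time-multiplexed over the limited pins between FPGAs, so the usable system clock is roughly the chip-to-chip I/O clock divided by the multiplexing ratio.

```cpp
#include <cstdio>

// Back-of-the-envelope model of inter-FPGA pin multiplexing.
// Every number here is an illustrative assumption, not a vendor figure.
int main() {
    double io_clock_mhz   = 400.0; // assumed chip-to-chip link speed
    int    cut_signals    = 2000;  // nets cut by the partition boundary
    int    available_pins = 250;   // pins available between two FPGAs

    // Each pin must carry several logical signals per system clock cycle.
    int mux_ratio = (cut_signals + available_pins - 1) / available_pins;
    double system_clock_mhz = io_clock_mhz / mux_ratio;

    printf("mux ratio %d -> usable system clock ~%.0f MHz\n",
           mux_ratio, system_clock_mhz);  // 8 -> ~50 MHz
    return 0;
}
```

With these made-up but plausible numbers, a design that cuts 2,000 nets across a 250-pin boundary runs at about an eighth of the link speed – which is why bigger FPGAs, meaning fewer cuts, help so much.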
It is now much easier to produce prototypes on FPGAs. But why should you? Steve Pollock’s S2C website gives four reasons:
1) Complex SoCs are difficult to simulate exhaustively, so running the design on an FPGA prototype gives greater confidence in its functional correctness.
2) Following from this, if you are using a lot of IP, prototyping is a way of ensuring that the IP blocks play nicely together.
3) As soon as you have a prototype, you can start exercising firmware and software – increasingly important as both get more complex.
4) You can show a possible customer the design in action well before tape-out.
(If the prototyping system is cheap enough or the customer is going to pay a lot, they can even have their own prototyping system to start product integration.)
S2C is not the only player in this area: there are a number of companies offering prototyping boards of varying degrees of sophistication. Synopsys, for example, has the HAPS boards and the prototyping tools it gained by acquiring Synplicity. It has even cooperated with Xilinx on a book on FPGA prototyping. And silicon companies have often built their own in-house systems.
In last week’s announcement that Xilinx is shipping its massive Virtex-7 2000T devices (see Moore Passing Zone), one of the customer case studies used in the press briefings was a company (not named) which had been planning to build 10 prototyping systems, each using 64 FPGAs to model 10 ASICs. Instead, they are building systems using the Virtex-7 2000T. Each system will use 16 FPGAs for 13 ASICs, reducing partitioning issues, lowering power, radically increasing performance, and slashing cost. The cost reduction is enough for them to build 200 systems for the system developers and software guys.
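A quick sanity check on those figures, using nothing beyond what the briefing quoted:

```cpp
#include <cstdio>

// Arithmetic on the numbers quoted in the Xilinx press briefing.
int main() {
    double old_per_asic = 64.0 / 10.0; // planned: 64 FPGAs per 10 ASICs
    double new_per_asic = 16.0 / 13.0; // 2000T:  16 FPGAs per 13 ASICs
    printf("FPGAs per ASIC: %.1f -> %.1f (about %.0fx denser)\n",
           old_per_asic, new_per_asic, old_per_asic / new_per_asic);
    return 0;
}
```

That works out at roughly 6.4 FPGAs per ASIC before and 1.2 after – about a five-fold density improvement, and every eliminated chip boundary is partitioning work that no longer has to be done.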
A slightly different approach to the problem is that of Eve, whose ZeBu-Server emulators use massive arrays of FPGAs for hardware/software co-verification. Emulation and virtual prototyping are alternatives to physical prototyping. With virtual prototyping, models of the SoC run on a computer or server farm. This allows high-level architectural exploration and then system evaluation and hardware/software co-design and co-verification. Cadence offers a Virtual System Platform (VSP). (The VSP is offered alongside the Palladium XP Verification Computing Platform and the Incisive Verification Platform – just thought you would like a platform of platforms.) The VSP uses TLM (Transaction-Level Modelling) and SystemC to build the prototypes. It uses processor models from Imperas and ARM, as well as having a library of TLM models and accepting RTL input. It executes C, C++, and assembler programs.
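To give a flavour of what “TLM and SystemC” means in practice, here is a minimal TLM-2.0 sketch: an initiator standing in for a CPU model performs a blocking write and read-back to a memory-mapped register. The module names are invented for illustration – this is the general style of model such platforms consume, not VSP’s actual API.

```cpp
#include <cstdint>
#include <cstring>
#include <iostream>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

using namespace sc_core;

// Invented example: a memory-mapped counter register as a TLM-2.0 target.
struct CounterReg : sc_module {
    tlm_utils::simple_target_socket<CounterReg> socket;
    uint32_t value = 0;

    SC_CTOR(CounterReg) : socket("socket") {
        socket.register_b_transport(this, &CounterReg::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
        if (trans.is_write())
            std::memcpy(&value, trans.get_data_ptr(), sizeof(value));
        else
            std::memcpy(trans.get_data_ptr(), &value, sizeof(value));
        delay += sc_time(10, SC_NS);   // model a bus access latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

// Invented example: an initiator standing in for a CPU model.
struct Cpu : sc_module {
    tlm_utils::simple_initiator_socket<Cpu> socket;

    SC_CTOR(Cpu) : socket("socket") { SC_THREAD(run); }

    void run() {
        uint32_t data = 42;
        tlm::tlm_generic_payload trans;
        sc_time delay = SC_ZERO_TIME;
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
        trans.set_data_length(sizeof(data));
        trans.set_streaming_width(sizeof(data));
        trans.set_address(0);

        trans.set_command(tlm::TLM_WRITE_COMMAND);   // blocking write
        socket->b_transport(trans, delay);

        data = 0;
        trans.set_command(tlm::TLM_READ_COMMAND);    // read it back
        socket->b_transport(trans, delay);
        std::cout << "read " << data << " after " << delay << "\n";
    }
};

int sc_main(int, char*[]) {
    Cpu cpu("cpu");
    CounterReg reg("reg");
    cpu.socket.bind(reg.socket);
    sc_start();
    return 0;
}
```

Transactions like these run orders of magnitude faster than RTL simulation, which is what makes software development on a virtual prototype practical.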
In this area, building models and developing modelling techniques offer a possible opening for start-ups (Imperas is only just over three years old). And there are other small companies operating in the high-level area; for example, France-based DOCEA, which provides power-modelling tools for high-level estimates and to inform hardware/software trade-offs in the initial design stages.
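DOCEA’s tools are far more sophisticated than this, but the flavour of a high-level power estimate can be caught in a few lines: dynamic power scales as activity × capacitance × V² × frequency, plus leakage. The sketch below uses that standard formula with entirely invented numbers to compare two operating points for the same block.

```cpp
#include <cstdio>

// Spreadsheet-level power estimate of the kind early architectural
// tools produce. The formula (dynamic = a*C*V^2*f, plus leakage) is
// standard; every number below is an invented placeholder.
double block_power_watts(double activity,     // average switching activity (0..1)
                         double cap_farads,   // total switched capacitance
                         double vdd_volts,    // supply voltage
                         double freq_hz,      // clock frequency
                         double leak_watts) { // static (leakage) power
    double dynamic = activity * cap_farads * vdd_volts * vdd_volts * freq_hz;
    return dynamic + leak_watts;
}

int main() {
    // Compare two candidate operating points for the same block.
    double fast = block_power_watts(0.15, 2e-9, 1.0, 800e6, 0.05);
    double slow = block_power_watts(0.15, 2e-9, 0.8, 400e6, 0.05);
    printf("800 MHz @ 1.0 V: %.2f W, 400 MHz @ 0.8 V: %.2f W\n", fast, slow);
    return 0;
}
```

Halving the frequency and dropping the supply to 0.8 V cuts the block from roughly 0.29 W to 0.13 W in this toy example – exactly the sort of trade-off that is cheap to explore before RTL exists and expensive to discover after.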
Despite ESL having been discussed for years, from the outside it looks as though the big boys in EDA are still developing their strategy and tools. If so, then the door is not yet closed against new ideas from those start-ups that can jump the financial hurdles. So with that thoroughly mixed metaphor, maybe we can conclude that the fizz hasn’t entirely gone from EDA after all.
I have mixed feelings about the EDA industry. Have I missed any positive signs? If so, please tell me.