
Three-Legged Stool

Process, Tools, and IP

No matter what we’re designing these days, we depend on three fairly distinct elements to get our system, circuit, board, or chip working and ready for action. That’s true whether the design is a single IP block, a subsystem-on-a-chip, a whole custom chip, a board, a box with many boards, or a complex system made up of many major components. In each case, we need the supporting process technology, the right tools to do the design work, and the IP that lets us add new and innovative things to our system without having to redesign the whole thing from primordial goo.

If you design with FPGAs (as many of you do), all three of these elements are available from the vendor. The FPGA companies work hard to provide these key pieces so their customers get working designs done faster and reach the production-volume purchases where the vendors start to earn their real money.

If you’re in the high-flying business of custom chip design, these three elements each bear tremendous weight. Semiconductor houses like TSMC, Samsung, IBM, GLOBALFOUNDRIES, Intel, and others spend literally billions of dollars racing to each new process node. This game has life-or-death stakes for these companies, and the amount of engineering required to support each new node grows exponentially. Everybody designing chips seems to understand and accept this, and we adjust our budgets upward every couple of years if we plan to stay in the custom chip game.

Likewise, IP companies like ARM are constantly developing new, more sophisticated architectures – and making sure those architectures run on the latest semiconductor processes. Companies doing custom chips are licensing more – and more expensive – IP with each new generation, and the old not-invented-here syndrome is giving way to a culture of increasing IP reuse. It is well understood that we can’t design billions of transistors on our chip all by ourselves. We need to start with some big pieces already designed, debugged, and working in order to have a chance at a working chip in our lifetime.

Somehow, though, a lot of us have a mental block when it comes to the EDA tools part of the equation. We understand that each new process generation dramatically increases the complexity of design. Over the last few generations, we have picked up optical proximity correction, double patterning, FinFETs, leakage-current issues, and many other new challenges – and those are just at the lowest level. These changes have a profound impact on the EDA tools, and EDA companies invest enormous amounts of engineering time and talent reworking their tools to deal with them.

But when it comes time to tool up for our next design, a lot of us expect that we’ll just magically get new tools (maybe even for our meager maintenance payments) that handle all these new issues for us. The economics of EDA simply don’t support that assumption. Every time Moore’s Law takes a step forward, chip design gets more expensive and fewer companies are able to participate. That means fewer companies are in the market for the latest design tools. To support this smaller number of customers, EDA companies have to invest ever-larger amounts of time, money, and engineering to keep their tools working on the latest technology. It’s easy to see that this path leads to a bad place for EDA. Building more and more complicated tools for smaller and smaller audiences leaves only one variable – the cost of tools must go up.
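To make that arithmetic concrete, here’s a toy sketch in Python. The numbers are invented purely for illustration – they aren’t actual EDA development costs or customer counts – but they show why a growing R&D bill divided by a shrinking customer base can only push per-customer pricing upward:

```python
# Toy model with made-up numbers (not industry data): as each node raises
# tool-development cost while the customer base shrinks, the break-even
# price per customer has to climb.

nodes = ["Node A", "Node B", "Node C"]   # hypothetical successive nodes
dev_cost = [100e6, 150e6, 225e6]         # assumed R&D cost to retool (USD)
customers = [200, 140, 100]              # assumed shrinking customer base

for node, cost, n in zip(nodes, dev_cost, customers):
    # Minimum average revenue per customer just to recover the R&D spend
    breakeven = cost / n
    print(f"{node}: ${breakeven / 1e6:.2f}M per customer to break even")
```

Even in this simple model, the break-even price per customer more than quadruples in three generations.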

However, many companies don’t expect what they’re willing to pay for tools to scale along with all that. The price of the most expensive tools required for chip design has remained about the same for the past three decades. Think about that for a minute… Is there anything else in chip design that has remained the same price for three decades? Certainly not overall NRE.

EDA companies get some respite from this because of the sheer size of the computing problems in today’s huge designs. To get the bazillions of compute cycles required for all the layout, verification, and simulation runs involved in a typical chip design, you need a big ol’ farm of servers cranking away 24 hours per day. All those servers eat up a lot of tool licenses, so even though the price of the tools isn’t changing much, the number of licenses continues to increase.
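The same back-of-the-envelope style shows how license volume props up revenue even when per-license pricing stays flat. Again, these figures are hypothetical, chosen only to illustrate the mechanism:

```python
# Toy sketch with invented numbers: a flat per-license price, multiplied by
# a server farm that grows with each design generation, still yields
# growing revenue per design.

price_per_license = 50_000                               # assumed flat list price (USD)
farm_sizes = {"Gen 1": 50, "Gen 2": 120, "Gen 3": 300}   # assumed license counts

for gen, n_licenses in farm_sizes.items():
    revenue = n_licenses * price_per_license
    print(f"{gen}: {n_licenses} licenses -> ${revenue / 1e6:.1f}M per design")
```

Flat pricing, growing seat counts – all of the growth comes from volume.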

Still, EDA companies have to be very smart to stay viable on this high-stakes third leg of the stool. We talked about this with Wally Rhines – vice chairman of EDAC (the international consortium of EDA companies) and chairman and CEO of Mentor Graphics – in a recent briefing on the Q4 2012 financial results across the EDA industry. EDA has been showing consistent growth despite these challenges – even in the most technologically challenging areas like CAE – and EDA employment also continues to rise. However, the CAE segment (the largest segment of EDA revenue) accounted for an industry-wide total of only $693 million in Q4 2012. That may sound like a big number, and it represents growth for the industry, but it is a pittance compared with the cost of the other two legs of the stool. IC Physical Design and Verification – the segment whose development costs are climbing most steeply – actually decreased to a quarterly total of just over $360 million.

EDA, considering its vital role in our design process, is dramatically underfunded.

We talked with Chi-Ping Hsu – senior VP of R&D at Cadence – about how Cadence is dealing with this issue. Hsu explains that Cadence is partnering more closely with foundries like IBM, Samsung, and TSMC to ensure that the tools are up to the task of handling the latest design challenges – particularly technical discontinuities like FinFET transistors and double patterning. The company recently announced a long-term agreement to collaborate with TSMC on advanced-node development. Hsu also explained that Cadence is working hard with the second leg of the stool – IP partners like ARM. Cadence, ARM, and TSMC recently announced the first Cortex-A57 test chip fabricated on TSMC 16nm FinFET technology. Hsu pointed out that this is a giant step in complexity beyond what the companies had accomplished to date, and that the level of collaboration required was significant.

Cadence is investing heavily in the next and after-next process nodes on the bet that it can gain and hold an advantage on those advanced processes when mainstream design moves there. The other EDA companies are sure to follow suit, and the cost across the industry will be significant. Without brilliant engineering on the tool front – and the financial resources to support it – the world might never make that transition. It’s time we thought of tools the same way we think of the other key components of our design infrastructure and acknowledged that we’ll need to scale our investment similarly. Otherwise, we’ll be left on a two-legged stool – and we all know how that would turn out.

