
DAC Previsited

Dawn of the Design Tool Decade

Exactly 299 days before “Cramming More Components onto Integrated Circuits” was published in Electronics Magazine, the first workshop of the SHARE Design Automation Project was held in Atlantic City, New Jersey. The SHARE workshop had papers with titles like “A method for the best geometric placement of units on a plane” and “Design automation effects on the organization.” The magazine article began with the statement “The future of integrated electronics is the future of electronics itself.”

With such generic paper titles and such an auspicious article intro, what has transpired since then that has inextricably linked those two seemingly obscure technical publication events? Pretty much everything.

15,071 days after that article appeared (this coming Monday, in fact), the 43rd annual Design Automation Conference (DAC) will kick off in San Francisco, California.
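If you'd like to check my day-counting, the arithmetic works out against the article's April 19, 1965 cover date:

```python
from datetime import date, timedelta

# Moore's "Cramming More Components onto Integrated Circuits" ran in
# Electronics on April 19, 1965.
moore_article = date(1965, 4, 19)

# 15,071 days later lands squarely on the opening Monday of DAC #43.
dac_43_opens = moore_article + timedelta(days=15071)

print(dac_43_opens)         # 2006-07-24
print(dac_43_opens.weekday())  # 0 (Monday)
```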

The SHARE design automation workshop, held in Atlantic City in June 1964, is now counted as DAC #1. The Electronics Magazine article, published less than a year after that seminal if inauspicious design automation event, contained a small section that threw down the gauntlet to the fledgling design automation industry:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year … Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.”

In other words – “Hang onto your hats, design automation dudes! We integrated circuit folks may have managed to fabricate only a 100-bit shift register with 600 transistors so far, but by the time DAC #11 rolls around in 1975, there could be almost 65,000 transistors on a chip. We don’t plan to be designing all 65,000 of them by hand, so we’re gonna need a little help from those computer programs of yours. Beyond that, don’t even think about what’s going to happen by DAC #43 in 2006. We’re talking billions!”
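For the record, that extrapolation is nothing more exotic than ten consecutive annual doublings. A quick sketch of the arithmetic – the roughly 64-component starting point for 1965 is my own reading of the log-scale plot in Moore's article, not a figure from his text:

```python
# Moore's 1965 extrapolation: components per chip doubling every year.
# Starting point of ~64 components (2^6) in 1965 is assumed here,
# read off the log2 plot in the original article.
start_year, start_components = 1965, 64

projection = {year: start_components * 2 ** (year - start_year)
              for year in range(start_year, 1976)}

print(projection[1975])  # ten doublings: 64 * 2**10 = 65,536
```

Ten doublings turn 64 into 65,536 – close enough to “roughly 65,000” for magazine work.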

OK, maybe Gordon Moore’s words were a little more reserved than mine, but I’ve had the advantage of over 40 years to think about what he wrote in that Electronics Magazine article, and… you know… I think he might have been onto something there.

Mr. Moore continued, “The total cost of making a particular system function must be minimized. To do so, we could amortize the engineering over several identical items, or evolve flexible techniques for the engineering of large functions so that no disproportionate expense need be borne by a particular array. Perhaps newly devised design automation procedures could translate from logic diagram to technological realization without any special engineering.”

For the next forty years, EDA did exactly what Gordon Moore had prescribed, building an industry around procedures that could translate from logic diagram to technological realization. That industry saw the rise and fall of countless small and medium-sized companies, each of which contributed critical technology to the art and science of design automation. The decades of DACs chronicled those fortunes, and the techniques most of those companies pioneered survive in some form as the intellectual property of one of the three or four major EDA companies operating today.

In fact, the design automation industry has been the sugar daddy of semiconductors. For four decades, it has provided the tools and techniques that give meaning to Moore’s Law. Ask yourself – what good are billions of transistors on a single chip without design automation tools that let us engineer products harnessing the power of all those transistors, and without verification tools that confirm they will work as intended when the final chips are fabricated?

Interestingly, as EDA has executed on Moore’s orders, the unsolved and interesting problem space has been squeezed out to both ends of the process. By the mid-1990s, the design process “from logic diagram [register transfer-level description] to technological realization [completed layout]” was reasonably mature and almost 100% automated for most digital applications. The biggest challenges today come before that process begins – generating something akin to RTL descriptions from a more compact description of a device’s desired behavior – and after it ends – modifying the layout to be manufacturable with today’s nanometer geometries and verifying that the whole thing is likely to work before millions are dropped on masks and NRE. Now adorned with the labels “ESL” (Electronic System Level [Design]) and “DFM” (Design For Manufacturability), these two technologies are likely to be at the forefront of this 43rd edition of the Design Automation Conference.

So, why am I, the author of last year’s infamous “Ditchin’ DAC” article, now claiming that the conference is important? I think the time has clearly come for the design automation industry to answer a new challenge. Forty-two years of getting from logic diagram to technological realization has borne fruit. We are now quite good at that process, even for “logic diagrams” that specify billions of transistors’ worth of functionality. However, the problem has changed significantly, and the industry has not yet veered from its well-worn path to address that new reality.

For much of the life of DAC, the audience, the customers, and the presenters and exhibitors were essentially one and the same. Most of those presenting technical papers on design automation worked in CAD and EDA groups at large companies, where they helped their designers develop and implement new systems in silicon. Then, with the advent of the ASIC industry, the crowd split in two: engineers who worked at ASIC companies, and engineers who worked at systems design houses that used the ASIC vendors’ services. Suddenly there were two distinct DAC audiences with different interests. System designers wanted to know what tools they could use to get their jobs done more efficiently, and ASIC vendors wanted to know about new approaches and algorithms for creating those tools.

On the heels of the ASIC industry came the commercial EDA industry, and now DAC attendance divided once again – semiconductor people from ASIC suppliers, EDA engineers working on commercial design automation, and system people from large electronics companies who were customers of both the ASIC suppliers and the EDA companies. Over time, the EDA companies came to dominate the trade show, and they had to split their messaging between topics of interest to ASIC vendors and topics of interest to end users, as both groups were now customers of the EDA industry. The dividing line came at the point of tapeout, with any process that preceded tapeout falling in the “system house” category and any process that came after tapeout being addressed to the ASIC vendor community.

Just as this scenario started to settle out, another transformation trickled through the industry – the advent of fabless semiconductor companies, mega-fabs, and field-programmable technologies such as FPGAs. These new categories divided the DAC audience yet again. Furthermore, segments of the audience became disenfranchised as DAC struggled to chart its course through waters muddied with customer-owned tooling (COT), analog design, PCB design, deep-submicron physical verification, system-level modeling and simulation, IP-based methodologies, and free-from-the-vendor FPGA design tools. DAC grew so broad and thin that no audience (with the possible exception of hard-core ASIC designers) could get a full travel budget’s worth of good information by attending. There was something there for everyone, but not enough of anything for most.

For the future, however, the DAC dilemma is even larger. At the back end of the process are the challenges faced by the relatively small (and shrinking) crowd handling the transistor-level details of physical implementation at today’s 90nm, 65nm, 45nm, and smaller geometries. These customers require deep, focused, expensive technology, and they’re willing to pay for it. The weight of Moore’s Law rests on their shoulders.

At the front of the design process, however, the horizons are broadening and the audience is growing. As electronic system design moves farther from the transistor and the RTL description and closer to the behavioral, software/hardware agnostic level, there will be an increasing consolidation of software and hardware design, signal processing and embedded computing, field-programmable and ASIC methodologies, and system-level and detailed-level design. Productivity demands and new design methodologies will break down the walls between these specialties, giving rise to a new generation of system designers capable of creating systems with multi-billion transistor complexity in a matter of days to weeks.

These designers will not have time to concern themselves with the specifics of RTL design, timing closure, gate-level modeling, or HDL simulation. Their focus will be on surfing the superstructure of today’s robust design automation techniques to get more capable products to market faster and cheaper. Next week, the 43rd DAC will showcase many of the technological threads that will eventually be woven into the fabric of those new methodologies. Watch and listen. You can hear the revolution coming.
