
DAC Previsited

Dawn of the Design Tool Decade

Exactly 299 days before “Cramming More Components onto Integrated Circuits” was published in Electronics Magazine, the first workshop of the SHARE Design Automation Project was held in Atlantic City, New Jersey. The SHARE workshop had papers with titles like “A method for the best geometric placement of units on a plane” and “Design automation effects on the organization.” The magazine article began with the statement “The future of integrated electronics is the future of electronics itself.”

With such generic paper titles and such an auspicious article intro, what has transpired since then that has inextricably linked those two seemingly obscure technical publication events? Pretty much everything.

15,071 days later (this coming Monday, in fact), the 43rd annual Design Automation Conference (DAC) will kick off in San Francisco, California.

The SHARE design automation workshop, held in Atlantic City in June 1964, is now counted as DAC #1. The Electronics Magazine article, published less than a year after that seminal if inauspicious design automation event, contained a small section that threw down the gauntlet to the fledgling design automation industry:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year … Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.”

In other words – “Hang onto your hats, design automation dudes! We integrated circuit folks may have managed to fabricate only a 100-bit shift register with 600 transistors so far, but by the time DAC #11 rolls around in 1975, there could be almost 65,000 transistors on a chip. We don’t plan to be designing all 65,000 of them by hand, so we’re gonna need a little help from those computer programs of yours. Beyond that, don’t even think about what’s going to happen by DAC #43 in 2006. We’re talking billions!”

OK, maybe Gordon Moore’s words were a little more reserved than mine, but I’ve had the advantage of over 40 years to think about what he wrote in that Electronics Magazine article, and… you know… I think he might have been onto something there.
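
For the record, the arithmetic behind that projection is easy to reproduce. Here’s a back-of-the-envelope check – a sketch that assumes a 1965 baseline of roughly 64 components per chip (my round number, not a figure quoted above) and one doubling per year through 1975:

    /* Back-of-the-envelope check of the 1965 projection.
       Assumption: roughly 64 components per chip in 1965,
       doubling once a year ("a factor of two per year"). */
    #include <stdio.h>

    int main(void)
    {
        long components = 64;                  /* assumed 1965 starting point */
        for (int year = 1966; year <= 1975; year++)
            components *= 2;                   /* one doubling per year */
        printf("Projected components per chip in 1975: %ld\n", components);
        return 0;                              /* prints 65536 */
    }

Ten doublings from 64 gets you 65,536 – within rounding distance of the 65,000 in the quote.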

Mr. Moore continued, “The total cost of making a particular system function must be minimized. To do so, we could amortize the engineering over several identical items, or evolve flexible techniques for the engineering of large functions so that no disproportionate expense need be borne by a particular array. Perhaps newly devised design automation procedures could translate from logic diagram to technological realization without any special engineering.”

For the next forty years, EDA did exactly what Gordon Moore had prescribed, building an industry around procedures that could translate from logic diagram to technological realization. That industry saw the rise and fall of countless small and medium-sized companies, each of which contributed critical technology to the art and science of design automation. The decades of DACs chronicled the fortunes of those enterprises, and the techniques most of them pioneered now survive in some form as the intellectual property of one of the three or four major EDA companies operating today.

In fact, the design automation industry has been the sugar daddy of semiconductors. For four decades, it has provided the tools and techniques that have given meaning to Moore’s Law. Ask yourself – what good are billions of transistors on a single chip without the design automation tools that allow us to engineer products that can harness the power of all those transistors, and without verification tools that can confirm that all those transistors will work as intended when the final chips are fabricated?

Interestingly, as EDA has executed on Moore’s orders, the unsolved and interesting problems have been squeezed out to both ends of the process. By the mid-1990s, the design process “from logic diagram [register transfer-level description] to technological realization [completed layout]” was reasonably mature and almost 100% automated for most digital applications. The biggest challenges today lie before that process begins, in generating something akin to RTL descriptions from a more compact description of a device’s desired behavior, and after it ends, in modifying the layout to be manufacturable at today’s nanometer geometries and verifying that the whole thing is likely to work before millions are dropped on masks and NRE. Now adorned with the labels “ESL” (Electronic System Level [Design]) and “DFM” (Design For Manufacturability), these two technologies are likely to be at the forefront of this 43rd edition of the Design Automation Conference.
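
To make the ESL half of that shift a little more concrete, here’s a minimal sketch of the kind of compact, behavior-only description a designer might hand to a high-level synthesis flow instead of writing RTL by hand. The FIR filter, the tap count, and the plain C types are illustrative assumptions on my part, not the input format of any particular tool:

    #include <stdio.h>

    #define TAPS 4

    /* Behavior-only description of a 4-tap FIR filter: no clocks,
       no pipelining, no gate-level structure, just the function
       the hardware is supposed to compute. */
    static int fir_filter(const int sample[TAPS], const int coeff[TAPS])
    {
        int acc = 0;
        for (int i = 0; i < TAPS; i++)
            acc += sample[i] * coeff[i];       /* multiply-accumulate */
        return acc;
    }

    int main(void)
    {
        const int samples[TAPS] = {1, 2, 3, 4};
        const int coeffs[TAPS]  = {4, 3, 2, 1};
        printf("filter output: %d\n", fir_filter(samples, coeffs));   /* 20 */
        return 0;
    }

Getting from a description like that to an efficient implementation – choosing the datapath, the pipelining, and the memory architecture – is exactly the gap the ESL crowd will be trying to close on the show floor.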

So, why am I, the author of last year’s infamous “Ditchin’ DAC” article, now claiming that the conference is important? I think the time has clearly come for the design automation industry to answer a new challenge. Forty-two years of getting from logic diagram to technological realization have borne fruit. We are now quite good at that process, even for “logic diagrams” that specify billions of transistors’ worth of functionality. However, the problem has changed significantly, and the industry has not yet veered from its previous path to address the new reality.

For much of the life of DAC, the audience, the customers, and the presenters and exhibitors were essentially one and the same. Most of those presenting technical papers on design automation worked in CAD and EDA groups at large companies, where they helped their designers develop and implement new systems in silicon. Then, with the advent of the ASIC industry, the crowd split in two: engineers who worked at ASIC companies, and engineers who worked at systems design houses that used the ASIC vendors’ services. Suddenly there were two distinct audiences for DAC with different interests. System designers wanted to know what tools they could use to get their jobs done more efficiently, and ASIC vendors wanted to know about new approaches and algorithms for creating those tools.

On the heels of the ASIC industry came the commercial EDA industry, and now DAC attendance divided once again – semiconductor people from ASIC suppliers, EDA engineers working on commercial design automation, and system people from large electronics companies who were customers of both the ASIC suppliers and the EDA companies. Over time, the EDA companies came to dominate the trade show, and they had to split their messaging between topics of interest to ASIC vendors and topics of interest to end users, as both groups were now customers of the EDA industry. The dividing line came at the point of tapeout, with any process that preceded tapeout falling in the “system house” category and any process that came after tapeout being addressed to the ASIC vendor community.

Just as this scenario started to settle out, another transformation trickled through the industry – the advent of fabless semiconductor companies, mega-fabs, and field-programmable technologies such as FPGAs. These new categories divided the DAC audience yet again. Furthermore, segments of the audience began to feel disenfranchised as DAC struggled to chart its course through waters muddied with customer-owned tooling (COT), analog design, PCB design, deep-submicron physical verification, system-level modeling and simulation, IP-based methodologies, and free-from-the-vendor FPGA design tools. DAC grew so broad and thin that no audience (with the possible exception of hard-core ASIC designers) could get a full travel budget’s worth of good information by attending. There was something there for everyone, but not enough of anything for most.

For the future, however, the DAC dilemma is even larger. At the back end of the process are the challenges faced by the relatively small (and shrinking) crowd dealing with transistor-level physical implementation at today’s 90nm, 65nm, 45nm, and smaller geometries. These customers require deep, focused, expensive technology, and they’re willing to pay for it. The weight of Moore’s Law rests on their shoulders.

At the front of the design process, however, the horizons are broadening and the audience is growing. As electronic system design moves farther from the transistor and the RTL description and closer to the behavioral, software/hardware-agnostic level, there will be an increasing consolidation of software and hardware design, signal processing and embedded computing, field-programmable and ASIC methodologies, and system-level and detailed-level design. Productivity demands and new design methodologies will break down the walls between these specialties, giving rise to a new generation of system designers capable of creating systems of multi-billion-transistor complexity in a matter of days to weeks.

These designers will not have time to concern themselves with the specifics of RTL design, timing closure, gate-level modeling, or HDL simulation. Their focus will be on surfing the superstructure of today’s robust design automation techniques to get more capable products to market faster and cheaper. Next week, the 43rd DAC will showcase many of the technological threads that will eventually be woven into the fabric of those new methodologies. Watch and listen. You can hear the revolution coming.
