
In the Shadow of Moore’s Law

Lithography Pulls Progress in Its Wake

For the last half century, a large segment of the world’s electronic engineers has worked in the insane vortex of Moore’s Law. Entire engineering careers, from college through retirement, have been spent with exponential change as the only constant. Whatever you learned in engineering school was obsolete before your first job promotion. Whatever you managed to design one year, you had to double two years later to stay competitive. Just about every tool, technique, or technology you invested time in learning had lost relevance within a couple of years, and your professional life was a nonstop exercise in re-education.

But Moore’s Law was all about lithography – trying to cram as many transistors as possible onto a slice of silicon. Applications in the direct path of digital design were dragged by their hearts through the Moore’s Law vortex. But many peripheral technologies didn’t experience Moore’s Law directly. Instead, they evolved much more slowly, rocking in the wake of the ongoing exponential change in digital electronics. That evolution is interesting to study because now, with Moore’s Law drawing to a close, previously pedestrian areas of engineering are poised to take center stage, picking up the slack where Moore left off.

Consider, for example, high-current components. The physics of the amount of copper required to carry a given current hasn’t changed over the years, so advanced lithography offers little value there. But progress in fields like materials science has given us new transistors with significant new advantages. Great strides have been made in wide-bandgap semiconductors, giving us better power transistors and new ways of making LEDs. Combined with today’s digital elements, they enable completely new classes of applications.

The analog and RF worlds haven’t ridden the crest of Moore’s Law either, but their domains have been profoundly affected by the technology that has. Analog functions have been wholesale converted to digital over the past few decades, and the once-proud art of analog design has, in many cases, been reduced to building the best possible ADCs and DACs so that the digital whiz kids could take over the heavy lifting. Advances in MIMO, beamforming, and the DSP technologies that drive them have revolutionized antenna design and performance, and those changes can be traced directly to the exponential improvement in digital semiconductor lithography.
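
The hand-off this paragraph describes, with analog front ends feeding samples to DSP, is easy to illustrate. Below is a minimal, hypothetical sketch of phase-shift-and-sum beamforming on a uniform linear array, the kind of work that now happens entirely in software once the ADCs have done their part. The element count, spacing, carrier frequency, and signal model are assumptions chosen purely for illustration.

```python
# A minimal sketch of phase-shift-and-sum (delay-and-sum) beamforming on a
# uniform linear array, done entirely in the digital domain. The element
# count, spacing, carrier frequency, and signal model are illustrative
# assumptions, not details taken from the article.
import numpy as np

c = 3e8                    # propagation speed (m/s)
f = 2.4e9                  # assumed carrier frequency (Hz)
lam = c / f                # wavelength
N = 8                      # assumed number of antenna elements
d = lam / 2                # half-wavelength element spacing
n = np.arange(N)
theta = np.deg2rad(30)     # direction of the incoming plane wave

# Complex baseband snapshot seen by each element for a unit-amplitude
# plane wave arriving from angle theta (noise-free, single sample).
x = np.exp(1j * 2 * np.pi * (d / lam) * n * np.sin(theta))

def steer(look_angle):
    """Weights that co-phase the array toward a chosen look angle (radians)."""
    return np.exp(-1j * 2 * np.pi * (d / lam) * n * np.sin(look_angle))

# Output power when the beam is steered at, and away from, the source.
on_target = np.abs(np.sum(steer(theta) * x)) ** 2             # ~ N**2 (array gain)
off_target = np.abs(np.sum(steer(np.deg2rad(-30)) * x)) ** 2  # near zero
print(f"on target: {on_target:.1f}, off target: {off_target:.2e}")
```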

MEMS may seem like an area that would benefit directly from the miracles of lithography advancement, but, in practice, there is little reward for packing more sensors onto a chunk of silicon. While it’s easy to come up with ways to use a million (or a billion) transistors instead of a thousand, it’s hard to think of an application that needs more than a dozen or so gyros or accelerometers. So, while sensors play a critical role in enabling the IoT (which is built heavily on the bounty of Moore’s Law), there is not going to be a sustained, decades-long exponential improvement in those technologies like the one we have seen in semiconductors.

Software engineering has an almost inverse relationship with Moore’s Law. Advances in semiconductors have delivered vast improvements in computational performance, memory and storage capacity, and network bandwidth. Our human ability to conceptualize and write efficient, bug-free code has not come remotely close to keeping up. In order to develop the unbelievably complex applications that take advantage of today’s computing technology, software engineering has had to adopt a “quantity over quality” approach. Sure, you can write much faster and more efficient code in assembly than you can with a modern high-level language compiler, but the ~1000x productivity advantage you get from using modern software development, debug, and verification tools comes in more than a little bit handy in getting your job finished before the next ice age.

The software problem is an enormous one. There is no miraculous lithography engine making software engineers twice as productive or intelligent every two years so that they can continue to develop ever more complex applications in the same amount of time. The industry has adapted by making modest improvements in software development tools and languages, adding vastly more manpower to the software engineering profession, and building up impressive stacks of reusable software IP. This has permitted an increasingly large army of software engineers to develop the most complex pieces of engineering ever conceived by humans. You may think the latest Xeon processor is a complicated piece of silicon, but it’s trivial compared with the complexity of the programs it will execute. An iPhone may seem like a daunting chunk of hardware technology, but it is dwarfed by the complexity of the software and app ecosystem it drives.

In an interesting feedback loop, one segment of the software engineering industry – electronic design automation (EDA) – has created extraordinarily complex software specifically for the purpose of driving Moore’s Law forward. EDA has constantly pushed the boundaries of software engineering and computational power in order to facilitate the design of the next process shrink. The frantic two-year cycle of Moore’s Law meant that EDA had to re-invent itself almost entirely every two years. The placement and routing tools of the early 1980s relied on n-squared algorithms that could run for days on a 10,000-gate design. Throw the many millions of gates in one of today’s chips at those methods, and no number of modern servers would ever finish the problem.
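
A quick back-of-the-envelope calculation shows why. The sketch below, using assumed gate counts and runtimes rather than figures from any real tool, works out what an n-squared algorithm does when a design grows from the 10,000 gates of the early 1980s to the tens of millions on a modern chip.

```python
# Back-of-the-envelope sketch of quadratic (n-squared) runtime scaling in
# place-and-route. All figures are illustrative assumptions, not
# measurements from any real EDA tool or design.
gates_1983 = 10_000          # design size the early-1980s tools handled
gates_today = 50_000_000     # assumed gate count for a modern SoC
days_1983 = 2                # "could run for days" on the old design

growth = gates_today / gates_1983        # 5,000x more gates
runtime_days = days_1983 * growth ** 2   # quadratic blow-up: 25,000,000x longer
print(f"{runtime_days:,.0f} days, roughly {runtime_days / 365:,.0f} years")
# -> 50,000,000 days, roughly 136,986 years, before even accounting for the
#    fact that wires, not just gates, get harder to route as designs grow.
```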

Because of this constant, crushing pressure to double the performance, capacity, and capability of EDA tools every two years, the EDA industry had little time to fuss over niceties like fancy user interfaces or glitzy cross-tool integration. And, as the challenges of chip design became more formidable, previously separate silos such as synthesis and place-and-route had to be combined. Now, with Moore’s Law slowing down, the pressure on EDA to run at full speed simply to remain in the same place may ease, and we seem to be entering an era where EDA tools can catch up with the rest of the software industry in their underlying architecture and usability. If EDA succeeds, it may make chip design itself more accessible, less risky, and less expensive, which could pave the way for a much wider gamut of new applications to take advantage of what Moore’s Law has brought us. It will be interesting to watch.
