Mobile Drives Everything

The Quiet Shift in Semiconductor Conductors

Nice job folks! Over the past few decades, we electronic engineers have created social change so dramatic that previous discontinuities like the Renaissance, the Industrial Revolution, and two world wars pale in comparison. Nothing in human history can rival the technological progress that has been achieved in electronics and the impact of that progress on civilized life.

We go to work each day, month, and year – and our baseline assumption is exponential improvement. Think about that a minute. We’ve all taken math (lots of it, in this profession). Exponentials in the real world are never sustainable. Yet, we work away in our jobs expecting the number of transistors on a chip to double every two years, just as sure as we expect the sun to rise in the morning. Biennial doubling of capability is reduced to status quo.

This is an illusion, of course, and this thinking forms the basis of further illusions that can lead us down odd and sometimes incorrect paths. For example, most of us think of Intel as the biggest driving force in semiconductors. It’s been that way for decades. Intel is a big deal because computers are a big deal, and Intel makes the processors for them. Computers are a big deal because they connect everyone to the Internet.

QED

Not really. We’ve all watched the explosion of mobile devices on the scene the past few years, but few of us have walked back through our chain of assumptions to see how that affects our design work – even if we’re designing in a space that doesn’t seem related to mobile or computers at all.

Let’s walk backward through that stack. Most citizens of the modern world are connected by the Internet. Today, however, mobile devices have replaced conventional computers as the dominant means of connecting to that Internet. Literally billions of mobile computing platforms of various types are connected, and that number is growing each day. That means the economics of the semiconductor industry is now driven not by the old Intel-PC-Internet triad, but by the needs of mobile devices.

Mobile drives everything.

To understand the implications of that fact, we need to consider the requirements for mobile design that differ from the traditional drivers of semiconductor evolution. In the past, performance ruled. We made whatever compromises were required to get a few more MIPS. Cost, power, and performance were all key requirements, but performance was king. When push came to shove, we’d compromise the other two if it would give us an edge in the big one.

Of course, we didn’t often have to compromise. Each new process node practically doubled our performance while cutting power and cost, so we could easily sit back and just enjoy the ride. We were spoiled. Then, however, the Moore’s Law mountain steepened. Being spoiled, we got cranky. We didn’t want to choose between doubling our performance and halving our power – and cost was supposed to be cut in half too; that was a given, right?

We threw our little engineering tantrums, stomped out of the room a few times, and then went back to work doing what we were actually trained to do. We made compromises. We traded off power and performance based on our application requirements, and since performance had always been king, most of our compromise went in that direction.

Then, more bad things happened. Effects that had always been ignorable constants began to be the main problem. For example, dynamic power used to be all we worried about. Sure, our circuits would leak a little, but real power was burned by toggling transistors. Our design-for-power methodology therefore focused on dynamic power: reducing the number of transistors that toggled and the energy expended on each of those transitions. Ever so quietly, our transistors began to leak more. Soon, while we weren’t looking, leakage current became the big power problem.
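
To put very rough numbers on that shift, here is a minimal sketch using the textbook first-order models – dynamic power as activity x capacitance x voltage-squared x frequency, and leakage as voltage x leakage current. Every parameter value below is an illustrative assumption, not data for any real process:

# Back-of-the-envelope comparison of dynamic vs. leakage power.
# All parameter values are illustrative assumptions, not real process data.

def dynamic_power(activity, cap_farads, vdd_volts, freq_hz):
    # First-order switching power: alpha * C * V^2 * f
    return activity * cap_farads * vdd_volts ** 2 * freq_hz

def leakage_power(vdd_volts, leak_amps):
    # Static power burned even when nothing toggles: V * I_leak
    return vdd_volts * leak_amps

# Hypothetical older node: leakage is a rounding error next to switching power.
print(dynamic_power(0.15, 2e-9, 1.2, 500e6))   # ~0.22 W dynamic
print(leakage_power(1.2, 5e-3))                # ~0.006 W leakage

# Hypothetical newer node: lower voltage and capacitance, but far leakier devices.
print(dynamic_power(0.15, 1e-9, 0.9, 1e9))     # ~0.12 W dynamic
print(leakage_power(0.9, 150e-3))              # ~0.135 W leakage, now comparable

With numbers like these, a methodology that only chases toggling activity misses a large and growing share of the power budget.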

Likewise, silicon area had always been the big cost issue. We did everything we could to make our chips smaller, because smaller chips meant more of them fit on a wafer, and that meant lower cost. With each new process node, however, our non-recurring engineering (NRE) costs rose. Those costs had to be amortized over our production volume, and before we knew it, NRE was the dominant factor in cost.
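
A toy cost model shows how quickly that flip happens. The dollar figures below are illustrative assumptions, not actual foundry or mask-set pricing:

# Per-chip cost = silicon cost + NRE amortized over the production run.
# Dollar figures are illustrative assumptions, not actual foundry pricing.

def unit_cost(die_cost_usd, nre_usd, volume_units):
    return die_cost_usd + nre_usd / volume_units

# Hypothetical mature node: modest NRE, so silicon area dominates cost.
print(unit_cost(8.00, 2_000_000, 500_000))    # ~$12 per chip, mostly silicon

# Hypothetical advanced node: smaller, cheaper die, but enormous up-front NRE.
print(unit_cost(5.00, 40_000_000, 500_000))   # ~$85 per chip, mostly NRE

Unless volumes are enormous, shaving square millimeters off the die no longer moves the needle the way it once did.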

Performance was not immune to the effect, either. For decades, we got more performance by making our transistors switch faster. Megahertz gave way to gigahertz, and our designs whizzed along at amazing clock rates. Then clock scaling ran into the power wall – pushing toggle rates any higher cost more watts and heat than we could tolerate – and we had to look in a whole different direction for performance: not continuing to increase clock rates, but widening our designs into more parallel structures.
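
A back-of-the-envelope throughput comparison makes the point; the figures are made up, and real gains depend entirely on how well the workload parallelizes:

# Throughput = (parallel units) x (clock rate) x (work per clock per unit).
# The figures are illustrative assumptions; real designs are limited by how
# much of the workload can actually run in parallel (Amdahl's law).

def throughput_ops(units, freq_hz, ops_per_clock):
    return units * freq_hz * ops_per_clock

one_fast_core = throughput_ops(1, 3.5e9, 4)      # ~14 Gop/s, hot and hungry
wide_slow_array = throughput_ops(16, 1.0e9, 4)   # ~64 Gop/s at a gentler clock

print(one_fast_core / 1e9, wide_slow_array / 1e9)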

The combination of these three fundamental shifts coincided with mobile’s emergence as the big economic driver of new semiconductor processes. There was more cash available for optimizing mobile than for big-iron, wall-powered, fan-toting computers, so mobile got the benefit of the heavy engineering investment. Mobile devices put power at the front of the list. Battery capacity, size, and weight were key considerations in system design, so anything that could reduce power consumption was a triple win. Performance-tuned, heatsink-wielding, gigahertz-toggling chip designs gave way to clock-gated, leakage-sparing, highly parallel, unit-cost-optimized devices that would win in the mobile environment.

No matter what type of chips you buy, you are affected by this shift. Everything is mobile-biased. Even the processes used for the biggest, fastest, most expensive, most power-hungry FPGAs are fundamentally rooted in the values of mobile device design. At the 28nm process node, it didn’t matter as much. Merchant fabs like TSMC had many variants of their process technology – each targeting a different type of application. Starting with 20nm, however, the fun is over. The cost and complexity of making even a single process variant work correctly is so high that multiple variants are pretty much out of the question.

We have a clear path down to 14nm and even 10nm, but then the crystal ball gets fuzzy. If mobile is our target/reference application, it is most likely economics rather than technology that will call an end to our five-decade dance with Moore’s Law. It seems most likely that, when that breakup comes, it will be mobile applications that will be by our side watching the end of modern engineering civilization – and hopefully observing the beginning of a new one.

2 thoughts on “Mobile Drives Everything”

  1. They say still waters run deep, but beneath the seemingly steady current of Moore’s Law is a turbulent undertow of technological transition. Can you feel it? What do you think? Is it all about mobile now?

  2. Nice article, Kevin, and a perfect transition into 3D IC design. Yes, as we reach the atomic limit with CMOS, just below 7nm, our semiconductor industry will have made the leap to 3D and (heat willing) we’ll be able to continue to scale a silicon-based microelectronic world for another 10+ years.
