
The Hardware Vanishing Point

Someday, Will It All Be Software?

The disciplines of hardware and software engineering have always been intertwined and symbiotic – like the yin and yang of some bizarre abstract beast. Software cannot exist without hardware to execute it, of course, and most hardware today is designed in the service of software. The vast majority of systems being designed today involve a mix of both elements working together, with software steadily inheriting more and more of the complexity load.

Let’s think about that for a minute. 

On the one hand, we have digital hardware technology that has rocketed up the Moore’s Law curve for five solid decades, exploding in complexity like nothing ever seen by humans. One might expect, based on that fact alone, that hardware would bear the brunt of system complexity. After all, we have gone from tens of transistors on a chip to billions, and from scant kilobytes of memory and storage to terabytes. A system-on-chip today combines billions of transistors into a device a few millimeters on a side, with capabilities that would have filled large buildings a few decades ago. Multiple processors, peripherals, programmable logic fabric, multi-gigabit I/Os, and massive amounts of memory can all be crammed onto a single silicon die.
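A quick back-of-the-envelope check of that growth, assuming the textbook doubling every two years sustained over roughly five decades:

$$ 2^{50/2} = 2^{25} \approx 3.4 \times 10^{7} $$

Scale a chip of tens of transistors by that factor and you land in the hundreds of millions to billions, right in the neighborhood of today’s SoCs.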

But, on the other hand, if we look at the actual systems being designed, it is software that accounts for most of the complexity. In fact, the largest software systems today are thousands of times more complex than the most complicated hardware ever designed. So, while we’ve been busy measuring the exponential rise in hardware complexity, software has been quietly exploding at something like a Moore’s Law squared rate over roughly the same timeframe.
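One way to read that “Moore’s Law squared” figure of speech, assuming the hardware curve grows as $2^{t/2}$ (a doubling every two years, with $t$ in years):

$$ \left(2^{t/2}\right)^{2} = 2^{t} $$

That would put software complexity on a doubling every single year, roughly a factor of $10^{15}$ over the same fifty years. The exact exponent is rhetorical, of course; the point is that software compounds on top of the hardware curve rather than merely riding it.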

While the battle for the Moore’s Law explosion in hardware complexity has largely been fought at the process level, finding ways to do lithography at ever-finer resolution, there has been a corresponding challenge to keep engineering methodologies apace. Schematic methods that worked for dozens to hundreds of components had to be replaced by register-transfer-level abstractions for designs with thousands to millions of components, and still higher levels of abstraction must be applied as the component count in a typical design rockets toward the billions. But this complexity-management challenge is bounded by what can actually be manufactured: as hardware engineers, we only ever need a methodology capable of creating circuits as complex as our fabs can build.

But, on the software side, there is only complexity. In fact, management of complexity is the only thing that truly limits what we can do in software. In response to that challenge, software engineering has built an impressive array of processes, tools, and techniques for breaking down incredibly complex problems into chunks that humans can comprehend. Programming languages have continued to climb in abstraction, libraries of pre-engineered functions have become vast, and processes for managing massive collaboration across a large number of engineers have been deployed. Many of these software engineering techniques have also found their way into the hardware discipline, as hardware engineers have encountered problems complicated enough to require them.

Now there is a lot of talk about Moore’s Law grinding to a halt. We know that the next few process nodes – 14nm, 10nm, 7nm – are probably all feasible to manufacture, but the incremental return on investment shrinks dramatically with each successive step. It is very unlikely that a node beyond 7nm will be economically viable on anything like the Moore’s Law two-year clock period.

But the hardware we already have will be capable of sustaining a continued dramatic explosion in the complexity of software for a long time to come.

If we extrapolate those trend lines into the future, we reach an interesting vanishing point. Software becomes completely dominant. The craft of hardware engineering becomes a tiny fraction of the enormous code juggernaut. The vast majority of the people creating electronic systems and technologies will be software engineers of some sort. Future archeologists may unearth hardware engineer fossils and wonder what strange beasts must have inhabited these modest bones.

As hardware becomes infinitely powerful and infinitesimally cheap, the value contribution must shift. Revenues cannot continue to be measured in shipments of physical units. Manufacturing itself will become a commodity. Value returned will come almost exclusively from software and other intellectual property, and that’s an economic reality that our society has not yet demonstrated the maturity to manage.

Consider the lessons of the smartphone economy. A single hardware platform is sold at almost zero margin in enormous quantities. The value of the device is derived mostly from software apps, but the economy of the smartphone app is hyper-deflationary. “What?!? That application only monitors and manages all my fitness activities and financial transactions, walks the dog, organizes my music, and keeps my kids out of danger? – no WAY that’s worth 99 cents!”

While this may sound like doom and gloom for hardware engineers, we should not despair. There is plenty of work for the current generation of multimeter-toting kids, and probably for another round or two after that. And, if you haven’t noticed, more and more of the craft of hardware engineering is already actually software engineering. It’s been insidiously taking us over for years already. That’s right – the call is coming from INSIDE THE HOUSE! 

Notice the makeup of most system design teams today. We hear that the ratio of software engineers to hardware engineers is already 5:1 to 10:1, and the trend seems to be for those ratios to only increase. So, hardware folks, wander over to the next row of cubicles sometime and see what those people are up to. Chances are your job will look more and more like theirs as time goes on.

2 thoughts on “The Hardware Vanishing Point”

  1. I don’t think it’s anywhere near that gloomy for Moore’s law and new innovation.

    Consider that most of the devices we have today were not imagined 50 years ago in the form and utility they have taken on. Sure, Dick Tracy had a watch phone, but few actually imagined that more computing power than existed on the planet in 1961 would exist on the watch-pads that are likely to become a new norm in the next decade … possibly fueled by the UI merging into something like Google Glass.

    Sure, we have beaten half of the computing performance bottlenecks, probably nearly to death, with GPUs and multi-cores in everything … that was the easy, low-hanging fruit. But the other half of Amdahl’s Law is that, sure, you can speed up the parallel side of the problem … but the serial problem just becomes the next bottleneck (the formula is sketched just after these comments).

    That is where compiling the hard, slow, large, critical serial algorithms into logic (FPGAs initially, ASICs or hard cores later) with highly distributed memory will enable new applications and devices that are currently only barely feasible because of the huge computational delays. This is most likely the next five to seven cycles of Moore’s Law playing out … moving the algorithms to gates in even larger 3D low-power dies.

    There are lots of computationally infeasible tasks today … that are probably just a hard-coded logic ASIC away, one that completely abandons everything we think of today as computing; the only thing holding them back is the legacy anchor of general-purpose computing.

    It’s why serial critical-path computing compiled into FPGA fabrics and ASICs is the next enabling technology to chip away at Amdahl’s Law. I still vote for Intel rolling their own reconfigurable logic fabric, tightly integrated into servers, desktops, mobile, and embedded products.

    But maybe AMD, MIPS, ARM, and the other guys will innovate and get their act together first.

  2. It will never be “all software.”

    You can talk all you want and code all you want, but hardware is a necessity if you want to DO anything!

    Since hardware is a rock-bottom necessity, I am heading the movement back to hardware. In Smart Small Systems, why use a microprocessor and software when a bit more sophisticated hardware can do the trick?

    I have developed an alternative technology that is mostly hardware. See “Natural Logic of Space and Time” (NL), available on Amazon.
    http://www.amazon.com/Natural-Logic-Space-Time-Real-time/dp/1508580359/
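
For reference, the Amdahl’s Law arithmetic behind the first comment above: with a fraction $p$ of a program parallelizable and $N$ processors available,

$$ \text{speedup}(N) = \frac{1}{(1 - p) + p/N} \quad \longrightarrow \quad \frac{1}{1 - p} \quad \text{as } N \to \infty $$

Even with $p = 0.95$ and unlimited cores, the best possible speedup is 20x; the remaining 5 percent of serial work is the wall that compiling critical serial paths into FPGA or ASIC logic aims to attack.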
