
The Tall Thin Engineer

Standing on the Shoulders of Giants

Engineering is one of the very few professions that constantly re-engineer themselves. By doing our work well, we change forever the nature of the work remaining to be done. Once something has been designed (contrary to the apparent opinions of those with chronic NIH – not-invented-here – syndrome who insist on perpetually re-designing the wheel), it is designed, and it should not need to be designed again.

Most engineering school curricula start us at a level of “bare metal.” We first study the basic underlying sciences – physics and chemistry, and the mathematics required to make it all work. The educational philosophy seems to be that we should have a conceptual grasp of the bare-metal layer – electrons skipping happily down conductive pathways, frolicking playfully across N- and P-regions, and delivering their cumulative punch right where we need it. From that basic level, all of our understanding of electronics evolves, through ever-higher levels of abstraction, until we reach the point (today) where we are docking a streaming-video-processing module onto an ARM-based processor subsystem and pressing the big green button – telling our laptop supercomputer to implement the whole thing for us on an FPGA development board.

Engineering is cumulative over time. The only reason we can willy-nilly grab giant chunks of technology as impressive and enormous as streaming-video modules and ARM-based processing subsystems and command them to perform together is that other engineers before us have worked out the excruciating details at every level of that process. They designed those things so that we do not have to.

As a result, we spend our days at a different level of abstraction than our predecessors. They learned how to make MOSFETs more efficient than BJTs. They did the math to get complementary and symmetrical pairs of n-type and p-type MOSFETs to implement logic functions. They worried about how to make a flip-flop out of cross-connected NAND gates. They designed Wallace-tree multipliers. They invented the LUT. They envisioned hardware description languages. They created synthesis and place-and-route software. They tuned and evolved the modern processor architecture. They toiled over standards and conventions to make their work re-usable.

On top of all this masterful and detailed work, our work begins.

We are the tall, thin engineers.

Of course, not all engineers today are tall and thin. Each level of abstraction in our miraculous hierarchy is constantly being reworked and re-imagined. Back in the semiconductor fabs, brilliant minds spend their entire careers optimizing specific aspects of lithography, of etching, of wafer handling, or on other esoteric topics so specialized and detailed that their mothers haven’t the faintest clue as to the nature of their progeny’s genius. Each of their achievements has ramifications that, in turn, rattle the upstream levels from below – tectonic plates of technology shifting beneath our precariously perched cities of engineering assumptions. One day, dynamic power is the only thing. The next day, leakage current steals so much from our design that it becomes the dominant factor. From the folks that design the flip-flops up to the guys working on new versions of asymmetric multi-processing architectures, everybody has to leap one assumption to the left. Electronic technology is a constantly growing pyramid whose base is ever on the move.

For the purpose of educating engineers, this presents a challenge. The distance from the bottom of this tower of knowledge to the top grows ever larger. We cannot possibly infuse detailed understanding into students that will reach from device physics to multi-processor architectures and software theory. Instead, the best we can do is to show them an overview of the universe, teach them to learn, infect them with a passion for problem solving, and turn them loose into the world.

There have always been tall, thin engineers – people who work the high-steel of engineering, depending with blind faith on the scaffoldings and structures assembled below them, and fearlessly assaulting the sky with ever-higher ambitions. But, with each passing generation of our profession, the number of stories in that building and the number of separate disciplines required to keep the whole thing standing grows larger. And, like a house of cards, we are vulnerable if any level of that structure ever begins to fail. If we ever produce a generation of engineers with no device physics experts, the whole foundation of mankind’s modern technological achievements will implode in upon itself. Modern electronic engineering would be sucked into a great black hole.

We are also in an era where disciplines are subsumed by other disciplines. Analog design has largely morphed into the digital domain. Instead of working out the hard math of our predecessors, most of us just get analog information into our digital world as quickly as possible so we can work in the relative comfort and predictability of our binary-based reasoning. Even problems that are honestly better and more easily solved in the analog world are captured, converted, computed, and re-converted – trading the elegance and simplicity of the original solution for the brute force dogma of digital dominance.

Now hardware itself is falling victim to this domain-morphing transformation. Instead of designing elegant, application-specific hardware – optimized for a particular task – we slap down a few billion free transistors to construct our go-to circuit: a microprocessor. Then, all our problem solving can take place in the safety of software. Why go pouring concrete on a bunch of specialized digital circuitry when the light touch of a few lines of code can give you the functionality you want – and can allow you to easily change your mind later? Someday soon – if they haven’t already – all hardware engineers will join the ranks of those at the lowest levels of the technology pyramid – slaving away for entire careers on obscure problems that the tall thin engineer will never understand – and never need to.

For the tall thin engineer of the future is almost certainly a software engineer. Sure, he’ll scrape together a few major components of incredibly sophisticated hardware – just to give his software the input and control it needs to run and to interact with the physical world. He’ll start with a basic computing system, stick on some sensors, some miscellaneous human interface components, some connectivity, and some storage perhaps. But, he’ll attach those things with no more thought than your teenager has in plugging USB components into his laptop. They’ll all just be plug-and-play modules that enable the real work to be done in software.

Whether the underlying hardware technology will be FPGAs, some other type of SoC, or some yet-to-be-popularized architecture is hardly relevant. It won’t matter if the underlying atoms are CMOS transistors or carbon nanotubes, and nobody will know or care whether they were done with optical lithography and quadruple patterning, EUV, or some other technique. The real energy then will be in the software – and even that will no longer be painstakingly coded with line-by-line instructions. Software too is climbing the abstraction tree.

Buckle up and enjoy the ride.

6 thoughts on “The Tall Thin Engineer”

  1. I think this is whistling in the dark.

    It is possible to see the whole structure, from the N & P tubs of the transistors through the analog and digital to the CPU to the software (OS and applications) to the result. Senior engineers have done this. It helps to see each layer built. It shows you that each layer is understandable: simple for complicated reasons and obvious in the past tense.

    We have been bewitched by Moore’s Law, where the chip shrinks by 50% every 18 months, getting faster and using less power as well. Silicon vendors had to double their chip complexities (double the number of transistors on the chip) every 18 months, or their revenue would drop by half with the new shrink.
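    As a back-of-the-envelope sketch (my own illustration, not part of the original comment): a roughly 0.7x shrink per linear dimension halves transistor area, which is what doubles the transistor budget at each new node.

```python
# Illustrative sketch (assumed numbers): a ~0.7x shrink per linear
# dimension cuts transistor area roughly in half, so the transistor
# budget on a fixed-size die roughly doubles each process node.

def transistors_after(nodes: int, start: float = 1e6, shrink: float = 0.7) -> float:
    """Transistor count after `nodes` shrinks of `shrink` per linear dimension."""
    area_scale = shrink ** 2  # area scales with the square of the linear dimension
    return start / (area_scale ** nodes)

for n in range(4):
    print(f"after {n} node(s): ~{transistors_after(n):,.0f} transistors")
```

    Since 0.7 squared is 0.49, each node multiplies the transistor count by about 2.04 – the doubling the commenter describes.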

    Software vendors are also sucked in. If the memory doubles and the CPU speed increases by 30% every 18 months, the software had better expand in features to fill the memory and time gap. Or someone else will. And the new features in both cases have to be compelling enough to be bought.

    Soon, this continuous schedule of invention became too much. As the systems got bigger, you did not have time to fully understand what was added. You had to hope that what was underneath was OK as is, and/or that you could bug-fix it in the field if it was not.

    Then came 2004, when Moore’s Law went into the ICU. At 90 nanometers, shrinking no longer worked as before. The chips got smaller, but they got *slower* and *hotter*. Example: I had a 3.5 GHz Pentium 4 in 2004. To this day, there has not been a 4 GHz Pentium.

    And multi-core has not helped very much. Two slow processors do not equal one fast processor, in the general case – quite aside from the fact that we have not been very successful at writing software for them.
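    The limit the commenter is gesturing at is the standard Amdahl's Law argument (my framing – the comment does not name it): the serial fraction of a workload caps the achievable speedup no matter how many cores you add. A minimal sketch:

```python
# Amdahl's Law sketch (illustrative, not from the original comment):
# the serial fraction of a workload limits multi-core speedup.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Overall speedup when only `parallel_fraction` of the work
    can be spread across `cores` processors (Amdahl's Law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a workload that is 80% parallel falls well short of 2x on two cores:
print(amdahl_speedup(0.8, 2))          # ~1.67x, not 2x
# ...and can never exceed 5x no matter how many cores are added:
print(amdahl_speedup(0.8, 1_000_000))  # approaches 5x
```

    With an 80%-parallel workload, two cores deliver only about a 1.67x speedup, and a million cores still cannot beat 5x – which is why two slow processors do not equal one fast one.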

    Implication: adding new software on top of old may slow everything down, if you are not very careful. Oops. We now have to take time to be very careful. Do more with less. Austerity.

    The economic drive to add features still holds in silicon. But since we cannot speed up the CPU anymore, the old crank-up-the-clock and make a bigger cache approach no longer works. Now, we have to design new, big hardware to fill up the chip. Oops. Now we have to schedule hardware inventions.

    But maybe there is a way of passing the buck. Instead of making ever bigger memories for a shrinking market, we can make ever bigger FPGAs, a kind of “memory for design.” Let the system designers do the inventing.

    It will be interesting to watch.

  2. I really agree with Kevin. Sometimes I wonder whether we actually do engineering when we compare the work we do with that of our predecessors. We may not be able to “Buckle up and enjoy the ride” sometime into the future.

  3. Kevin,

    I share the same sentiments as you. Your words capture the true essence and elegance of engineering. Sadly, not many people care about the engineering, and this includes the engineers themselves… 🙁


  4. Dwyland,
    You got me. I was totally whistling in the dark. That tune has been stuck in my head for years!

    I agree with what you’re saying. I don’t think Moore’s Law was really a free ride, though. There’s a lot of amazing work required to hit each new process node. The stuff that has to happen for us to reach 10nm and 7nm really makes me wonder – triple or quadruple patterning, EUV – there are some scary technologies involved in making that work. One has to wonder when it will stop making economic sense to keep going there. Someday, the semiconductor folks may build a new node, and nobody will come.

  5. At DATE in Grenoble (report coming soon) there was a panel session about the importance of tall thin chip architects. At question time, a very tall and quite thin man stood up and thanked the panel for backing up what he had been telling people for many years.

    Sorry – didn’t catch his name
