The Tall Thin Engineer

Standing on the Shoulders of Giants

Engineering is one of the very few professions that constantly re-engineer themselves. By doing our work well, we change forever the nature of the work remaining to be done. Once something has been designed (contrary to the apparent opinions of those with chronic NIH syndrome who insist on perpetually re-designing the wheel), it is designed, and it should not really need to be designed again.

Most engineering school curricula start us at a level of “bare metal.” We first study the basic underlying sciences – physics and chemistry, and the mathematics required to make it all work. The educational philosophy seems to be that we should have a conceptual grasp of the bare metal layer – electrons skipping happily down conductive pathways, frolicking playfully across N- and P-regions, and delivering their cumulative punch right where we need it. From that basic level, all of our understanding of electronics evolves, through ever-higher levels of abstraction, until we reach the point (today) where we are docking a streaming-video-processing module onto an ARM-based processor subsystem and pressing the big green button – telling our laptop supercomputer to implement the whole thing for us on an FPGA development board.

Engineering is cumulative over time. The only reason we can willy-nilly grab giant chunks of technology as impressive and enormous as streaming-video modules and ARM-based processing subsystems and command them to perform together is that other engineers before us have worked out the excruciating details at every level of that process. They designed those things so that we do not have to.

As a result, we spend our days at a different level of abstraction than our predecessors. They learned how to make MOSFETs more efficient than BJTs. They did the math to get complementary and symmetrical pairs of n-type and p-type MOSFETs to implement logic functions. They worried about how to make a flip-flop out of cross-connected NAND gates. They designed Wallace-tree multipliers. They invented the LUT. They envisioned hardware description languages. They created synthesis and place-and-route software. They tuned and evolved the modern processor architecture. They toiled over standards and conventions to make their work re-usable.
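To appreciate just one of those worked-out details, here is a minimal sketch (mine, not anything from the engineers in question) of an SR latch built from two cross-coupled NAND gates, simulated in Python by re-evaluating the gates until the outputs settle. The function names and the settling loop are modeling conveniences, not how the circuit is realized in silicon.

```python
# A minimal sketch (illustration only): an SR latch built from two
# cross-coupled NAND gates, iterated until its outputs settle.

def nand(a: int, b: int) -> int:
    """2-input NAND of single-bit values."""
    return 0 if (a and b) else 1

def sr_latch(s_n: int, r_n: int, q: int = 0, q_n: int = 1):
    """Active-low set/reset latch: re-evaluate the two gates until stable."""
    for _ in range(4):                     # a few passes is enough to settle
        new_q = nand(s_n, q_n)
        new_q_n = nand(r_n, new_q)
        if (new_q, new_q_n) == (q, q_n):
            break
        q, q_n = new_q, new_q_n
    return q, q_n

print(sr_latch(s_n=0, r_n=1))              # set (active low)          -> (1, 0)
print(sr_latch(s_n=1, r_n=0, q=1, q_n=0))  # reset                     -> (0, 1)
print(sr_latch(s_n=1, r_n=1, q=0, q_n=1))  # both released: state held -> (0, 1)
```

The point is not the ten lines of code – it is how much deliberate effort sits behind every element we now drop into a design without a second thought.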

On top of all this masterful and detailed work, our work begins.

We are the tall, thin engineers.

Of course, not all engineers today are tall and thin. Each level of abstraction in our miraculous hierarchy is constantly being reworked and re-imagined. Back in the semiconductor fabs, brilliant minds spend their entire careers optimizing specific aspects of lithography, of etching, of wafer handling, or on other esoteric topics so specialized and detailed that their mothers haven’t the faintest clue as to the nature of their progeny’s genius. Each of their achievements has ramifications that, in turn, rattle the upstream levels from below – tectonic plates of technology shifting beneath our precariously perched cities of engineering assumptions. One day, dynamic power is the only thing. The next day, leakage current steals so much from our design that it becomes the dominant factor. From the folks that design the flip-flops up to the guys working on new versions of asymmetric multi-processing architectures, everybody has to leap one assumption to the left. Electronic technology is a constantly growing pyramid whose base is ever on the move.

For the purpose of educating engineers, this presents a challenge. The distance from the bottom of this tower of knowledge to the top grows ever larger. We cannot possibly infuse students with a detailed understanding that reaches all the way from device physics to multi-processor architectures and software theory. Instead, the best we can do is to show them an overview of the universe, teach them how to learn, infect them with a passion for problem solving, and turn them loose into the world.

There have always been tall, thin engineers – people who work the high-steel of engineering, depending with blind faith on the scaffoldings and structures assembled below them, and fearlessly assaulting the sky with ever-higher ambitions. But, with each passing generation of our profession, the number of stories in that building and the number of separate disciplines required to keep the whole thing standing grows larger. And, like a house of cards, we are vulnerable if any level of that structure ever begins to fail. If we ever produce a generation of engineers with no device physics experts, the whole foundation of mankind’s modern technological achievements will implode in upon itself. Modern electronic engineering would be sucked into a great black hole.

We are also in an era where disciplines are subsumed by other disciplines. Analog design has largely been morphed into the digital domain. Instead of working out the hard math of our predecessors, most of us just get analog information into our digital world as quickly as possible so we can work in the relative comfort and predictability of our binary-based reasoning. Even problems that are honestly better and more easily solved in the analog world are captured, converted, computed, and re-converted – trading the elegance and simplicity of the original solution for the brute force dogma of digital dominance.
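As a rough illustration of that capture-convert-compute pattern (my own sketch, with an assumed sample rate and cutoff), here is the digital stand-in for what a single resistor and capacitor would accomplish directly in the analog world – a first-order low-pass filter run over sampled data.

```python
# A rough sketch of the capture-convert-compute pattern (illustration only,
# with assumed sample rate and cutoff values): the digital stand-in for what
# a single resistor and capacitor would do directly in the analog world.

import math

FS = 1000.0                                        # assumed sample rate, Hz
FC = 10.0                                          # assumed cutoff frequency, Hz
ALPHA = 1.0 - math.exp(-2.0 * math.pi * FC / FS)   # first-order IIR coefficient

def lowpass(samples):
    """First-order low-pass filter, the discrete cousin of an analog RC."""
    y = samples[0]
    out = []
    for x in samples:                              # "computed," sample by sample
        y += ALPHA * (x - y)
        out.append(y)
    return out

# A 2 Hz tone with 200 Hz interference, "captured and converted" at FS:
signal = [math.sin(2 * math.pi * 2.0 * n / FS)
          + 0.2 * math.sin(2 * math.pi * 200.0 * n / FS)
          for n in range(1000)]
smoothed = lowpass(signal)                         # 200 Hz component is attenuated
```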

Now hardware itself is falling victim to this domain-morphing transformation. Instead of designing elegant, application-specific hardware – optimized for a particular task – we slap down a few billion free transistors to construct our go-to circuit – a microprocessor. Then, all our problem solving can take place in the safety of software. Why go pouring concrete on a bunch of specialized digital circuitry when the light touch of a few lines of code can give you the functionality you want – and can allow you to easily change your mind later? Someday soon – if they haven’t already – all hardware engineers will join the ranks of those at the lowest levels of the technology pyramid – slaving away for entire careers on obscure problems that the tall thin engineer will never understand – and never need to.

For the tall thin engineer of the future is almost certainly a software engineer. Sure, he’ll scrape together a few major components of incredibly sophisticated hardware – just to give his software the input and control it needs to run and to interact with the physical world. He’ll start with a basic computing system, stick on some sensors, some miscellaneous human interface components, some connectivity, and some storage perhaps. But, he’ll attach those things with no more thought than your teenager has in plugging USB components into his laptop. They’ll all just be plug-and-play modules that enable the real work to be done in software.
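A purely hypothetical sketch of that future workflow might look like the Python below. TemperatureSensor and CloudLink are invented stand-ins for plug-and-play modules – not real libraries – and the point is simply that all of the visible engineering happens in a few lines of application code.

```python
# A purely hypothetical sketch of the "tall thin" workflow. TemperatureSensor
# and CloudLink are invented stand-ins for plug-and-play modules, not real
# libraries; the visible engineering is the few lines of logic in main().

import random
import time

class TemperatureSensor:
    """Stand-in for a plug-and-play sensor module."""
    def read_celsius(self) -> float:
        return 20.0 + random.uniform(-0.5, 0.5)    # pretend hardware reading

class CloudLink:
    """Stand-in for a connectivity module."""
    def publish(self, topic: str, value: float) -> None:
        print(f"publish {topic}: {value:.2f}")

def main() -> None:
    sensor, link = TemperatureSensor(), CloudLink()
    for _ in range(3):                             # the "real work," done in software
        reading = sensor.read_celsius()
        if reading > 20.25:                        # trivial application logic
            link.publish("room/overtemp", reading)
        time.sleep(0.1)

if __name__ == "__main__":
    main()
```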

Whether the underlying hardware technology will be FPGAs, some other type of SoC, or some yet-to-be-popularized architecture is hardly relevant. It won’t matter if the underlying atoms are CMOS transistors or carbon nanotubes, and nobody will know or care whether they were done with optical lithography and quadruple patterning, EUV, or some other technique. The real energy then will be in the software – and even that will no longer be painstakingly coded with line-by-line instructions. Software too is climbing the abstraction tree.

Buckle up and enjoy the ride.

6 thoughts on “The Tall Thin Engineer”

  1. I think this is whistling in the dark.

    It is possible to see the whole structure, from the N & P tubs of the transistors through the analog and digital to the CPU to the software (OS and applications) to the result. Senior engineers have done this. It helps to see each layer built. It shows you that each layer is understandable: simple for complicated reasons and obvious in the past tense.

    We have been bewitched by Moore’s Law, where the chip shrinks by 50% every 18 months, getting faster and using less power as well. Silicon vendors had to double their chip complexities (double the number of transistors on the chip) every 18 months, or their revenue would drop by half with the new shrink.

    Software vendors are also sucked in. If the memory doubles and the CPU speed increases by 30% every 18 months, the software had better expand in features to fill the memory and time gap. Or someone else will. And the new features in both cases have to be compelling enough to be bought.

    Soon, this continuous schedule of invention became too much. As the systems got bigger, you did not have time to fully understand what was added. You had to hope that what was underneath was OK as is, and/or that you could bug-fix it in the field if it was not.

    Then came 2004, when Moore’s Law went into the ICU. At 90 nanometers, shrinking no longer worked as before. The chips got smaller, but they got *slower* and *hotter*. Example: I had a 3.5 GHz Pentium 4 in 2004. To this day, there has not been a 4 GHz Pentium.

    And multi-core has not helped very much. Two slow processors do not equal one fast processor, in the general case – aside from the fact that we have not been very successful at writing software for them.

    Implication: adding new software on top of old may slow everything down, if you are not very careful. Oops. We now have to take time to be very careful. Do more with less. Austerity.

    The economic drive to add features still holds in silicon. But since we cannot speed up the CPU anymore, the old crank-up-the-clock and make a bigger cache approach no longer works. Now, we have to design new, big hardware to fill up the chip. Oops. Now we have to schedule hardware inventions.

    But maybe there is a way of passing the buck. Instead of making ever bigger memories for a shrinking market, we can make ever bigger FPGAs, a kind of “memory for design.” Let the system designers do the inventing.

    It will be interesting to watch.

  2. I really agree with Kevin. Sometimes I wonder whether we actually do engineering when we compare the work we do with that of our predecessors. We may not be able to “Buckle up and enjoy the ride” at some point in the future.

  3. Kevin,

    I share the exact same sentiments as you. Your words capture the true essence and elegance of engineering. Sadly, not many people care about the engineering – and this includes the engineers themselves. 🙁

    -Gautam.

  4. Dwyland,
    You got me. I was totally whistling in the dark. That tune has been stuck in my head for years!

    I agree with what you’re saying. I don’t think Moore’s Law was really a free ride, though. There’s a lot of amazing work required to hit each new process node. The stuff that has to happen for us to reach 10nm and 7nm really makes me wonder – triple or quadruple patterning, EUV – there are some scary technologies involved in making that work. One has to wonder when it will stop making economic sense to keep going there. Someday, the semiconductor folks may build a new node, and nobody will come.

  5. At DATE in Grenoble (report coming soon), there was a panel session about the importance of tall thin chip architects. At question time, a very tall and quite thin man stood up and thanked the panel for backing up what he had been telling people for many years.

    Sorry – didn’t catch his name.
