feature article

Encapsulating Engineering

Construction at the End of the Road

Those who build roads are fundamentally different from those who use roads.

Those who build roads immerse themselves in every detail of the route. They know the distances, the hills and valleys, the rivers and forests, the grades and angles, the weather, and the wildlife. They have considered every aspect of the particular journey and imagined and re-imagined the trips of the travelers to come. They have contemplated every contingency, every possibility, in an attempt to craft a safe, smooth and seamless experience.

Those who use the roads are insulated from those myriad details. They ride along with the cruise control set at speed limit plus four, GPS ticking off the miles until the exit, wondering quietly whether their podcast episode will finish before or after this stretch of highway terminates. If the road engineers did their jobs well, the driver’s day will be completely unremarkable, the requirements on their skills and awareness minimal, their safety and security all but assured.

It is the duty of the engineer to over-think the problem, to come up with the most elegant and robust solution possible, to do the job once, right, so that particular design will never have to be designed again. In electronics, many of us both create and use what we glibly refer to as “IP” (Intellectual Property). We engineer for re-use and we advocate design practices that walk in the well-tried footsteps of those who came before. We want our work to be the highway for the engineer of the future. We imagine the journey of our re-user as safe and smooth, insulated from the details and hazards we encountered in our initial trail-blazing trip.

Encapsulation is the art of making our designs reusable and extensible, of doing it right the first time so it doesn’t have to be done again, of paving the way for imagined future generations of engineers, so that they may smoothly drive to the ends of the roads we have built and then create their own construction projects to enable the exploration of frontiers that lie beyond.
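In software terms, that road-building discipline is just interface design: expose a small, stable surface and hide the machinery behind it. A minimal Python sketch of the idea (the `Route` class and its methods are purely illustrative, not from any real library):

```python
class Route:
    """A 'road' for future travelers: a small, stable public interface.

    Users call distance_km() and estimated_time_h(); they never touch
    the waypoint data encapsulated inside.
    """

    def __init__(self, waypoints):
        # Internal detail: a list of (x_km, y_km) coordinates.
        # The leading underscore signals "engineering detail, keep out".
        self._waypoints = list(waypoints)

    def distance_km(self):
        """Total route length; callers need not know how it is computed."""
        total = 0.0
        for (x1, y1), (x2, y2) in zip(self._waypoints, self._waypoints[1:]):
            total += ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        return total

    def estimated_time_h(self, speed_kmh=100.0):
        """Travel time at a given speed, built only on the public interface."""
        return self.distance_km() / speed_kmh


trip = Route([(0, 0), (3, 4), (3, 10)])
print(trip.distance_km())       # 11.0  (5 km + 6 km legs)
print(trip.estimated_time_h())  # 0.11
```

The point is that the internal representation can later be swapped (great-circle distances, elevation-aware costs) without breaking a single caller — the re-user just drives the road.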

For the past fifty years, integrated electronics has been on a rocket ride of exponential engineering progress. Brilliant minds have spent their careers taking technology that could put a handful of transistors on a chip and evolving it into substantial stacks of multi-disciplined IP, tools, and processes that enable today’s kids to surf silicon tsunamis with billions of transistors working in concert.

Viewed at a distance, the end product of the past fifty years of electronic engineering is the globally connected computing network. Although we all work on different flavors of chips, boards, gadgets, and systems, most of those creations end up integrating into this massive connected computing fabric in one way or another. And, while the hardware aspect of that massive collaboration continues to evolve, particularly with the current explosion of the Internet of Things (IoT), it is unquestionable that the bulk of the engineering that comes next will be software design.

Software engineers are fundamentally different from hardware engineers. Of course, some understanding of software is required in order to function as a hardware engineer these days, but there is a definite divide between the true professionals in the two disciplines. Viewed one way, hardware engineers have paved a freeway for today’s software developers. Ignoring for a moment some lawless border territory in the area of operating systems, APIs, and driver stacks, software engineers can set the cruise control and glide smoothly over the pavement of thousands of man-years of hardware engineering progress, listening to their favorite playlists without spilling a drop of their grande lattes. 

This does not mean the software engineer is an inferior creature – any more than we would think of those who design logic as less informed or skillful than those who design transistors. In the same way that there is a hierarchical stack of encapsulated enabling technologies, there is a corresponding hierarchical stack of engineering experts, ranging from semiconductor process engineers at the primordial end of the food chain to Ruby on Rails whiz kids who probably couldn’t tell you what a transistor does. Every level of this technology structure must be continually populated with savvy, competent domain experts who can maintain the status quo while (hopefully) advancing the states of the art in their particular specialties.

This is where a note of caution is in order. In technology, there is no “farm to table.” There are no interesting electronic engineering projects remaining that don’t build upon layers of existing encapsulated technology. The maker doesn’t operate in a vacuum; he depends on the Arduino board, the 3D printer, and the various compilers, tools and assorted products of the large-scale industrial engineering ecosystem. In order for the top of the pyramid to press ever skyward, the base and interim structural layers must be constantly maintained. 

Feeding all these levels of technology with fresh engineering talent is left to the vagaries of the free market. We don’t have a structured system to make sure that we have enough wafer production experts, lithography ninjas, EDA gurus, logic design savants, and wireless communication monks to keep a solid footing underneath the hordes of mobile app developers, social media startups, and IoT gadgeteers.

The glamour today is at the end of the road – blazing trails off into new horizons in big data and massively interconnected wizardry. The best and brightest engineering students are drawn to those stars, hitching their wagons and investing their talents – building the skills that enable the creation of those edge-of-possibility inventions. There is less excitement in maintaining the roads that grant access to those frontiers. Filling potholes in the pavement of previous generations is far less attractive than designing suspension bridges spanning the unknown abyss of the future.

We can hope that supply and demand will respond adequately, and that the rare skills required to feed and care for each layer of the incredible technology stack that we have collectively created over the past fifty years will be available in sufficient quantities. Otherwise, we run the risk of the scaffolding collapsing on itself. It is not impossible to imagine a world where there are no brilliant minds remaining who understand complex physical layout algorithms or advanced architectures for arithmetic logic. If we reach a juncture where those black boxes need to be re-opened and repaired, we could well find ourselves stranded.

Today, however, the money is at the top of the stack. Venture funds are easy to come by for the latest mobile app or the newest social networking idea. Try to start a new EDA or semiconductor company, however, and the piggy bank shrinks to just about zero. The same goes for acquisitions and IPOs – with end application companies valued in the billions and infrastructure companies in the millions. That 1000x difference is easily noticed by the young and intelligent, and they craft their careers to chase the big rewards, hoping for a home run of their own. In the lower flight levels, the burden is on big, established corporations to maintain and grow the businesses in which they amassed their fortunes and, at the same time, be the stewards of their component of the technology infrastructure.

Let’s all hope the structure stays steady.

3 thoughts on “Encapsulating Engineering”

  1. Not sure it’s a good analogy (the “road” thing). The way we write software (C/C++) is very dependent on how the machines work, so there’s a lot of interdependency that shouldn’t be there in the “stack”. The “top” of the current stack reminds me of how MS Windows got built on top of DOS, and it all got thrown out eventually. I.e., the current stack is based on a von Neumann computing model that should have died off a long time ago, but Intel kept it going well past its prime by scaling faster than others could innovate, and now that we’ve hit the end of that road I suspect “steady” will go the way of DOS.

  2. “Try to start a new EDA or semiconductor company, however, and the piggy bank shrinks to just about zero.”
    Yep, sadly I can confirm: this is consistent with what I’m seeing as co-founder of Synflow.

    “That 1000x difference”
    Let me tell you why I think there is such a difference. VCs have to be extremely smart to survive: it is purely evolutionary; too many bad investments and they’ll run out of money — and, therefore, business. What do they see when they look at EDA and semiconductor? Both industries are already consolidated. Big EDA sells to big semiconductor, and that’s it. EDA tools suck (I suspect this might be due to a lack of good coders), innovation is very slow, and the adoption of innovative solutions is even slower. This is a dying industry. There’s growth, sure, but it’s nothing like the exponential growth that investors crave.

    But more than numbers, because growth and market sizes only tell you so much, the biggest problem in my opinion is culture.

    Think about it. Most people in semiconductor and EDA were already working in this industry more than 20 years ago. 20. YEARS. AGO. Look at everything that’s changed in software for the last twenty years, and now you should understand a lot of what’s wrong with semiconductor. People were writing RTL at the time, and they’re writing RTL now. 20 years ago, the World Wide Web itself was still an infant. The few dynamic websites back then were written in C. I could go on and on, but I think you get the point 🙂

    Could this situation change? Yes I think so, if we had new companies appear that suddenly needed to design integrated circuits. This won’t come from existing semiconductor companies of course, but it could be a by-product of the Internet of Things.

  3. Also, I suspect that a reason why a lot of people will try to use software (even to a point where it might not make sense) is because they want to avoid hardware as much as possible. I don’t blame them.

    Being originally a software guy, I was pretty horrified by the whole FPGA experience. Just getting your design to run on the board was an achievement. Note that I consider the problem of getting the design done in the first place to be solved thanks to the Cx language, but before that you had to design with Verilog or VHDL (imagine the barrier this is for software people!).

    We’re doing our part at Synflow to improve the “out of the box” experience with FPGAs, and given the chance we will eventually get there. For that we need funding, and I think all FPGA companies (and many semiconductor companies, too) should be thinking about it!

