Let me first make one thing perfectly clear: I’m proud to be a software engineer. For the first twenty years plus after I graduated from engineering school, I worked as a software engineer and managed teams of software engineers. We did good work, and I took pride in our accomplishments. I worked with some of the brightest, most innovative, hardest-working engineers I have ever met, and we built some amazing technology.
During the time I was in software, the art of software engineering evolved and matured dramatically. The languages, methods, paradigms and best practices underwent spectacular change. Since then, software engineering has evolved even more. This change is a good thing, because, as an engineering discipline, software engineering – what’s the technical term? Oh, yeah. “Sucks.”
Because software engineering is the newest and least mature of the established engineering disciplines. Sure, some of the best minds in the world have been working diligently on the problem of software engineering for several decades now, and considerable progress has been made. But software is the most complex thing ever created by humans. And the amount and complexity of software the world needs far outstrips the output of all of the world’s software engineers combined. We have an enormous gap between the quantity and quality of software that is possible and useful, and the quantity and quality that software engineers can actually deliver. And the problem is only going to get worse.
Check any team you can find developing systems that include both hardware and software. In just about every case, the software engineers outnumber the hardware engineers by a large factor. In just about every case, it will end up being the development of the software that drives the schedule and the eventual release of the product. And, in just about every case, a difficult judgment call will have to be made about whether the software is yet “good enough” to ship.
One of the biggest vectors driving this gap is Moore’s Law. Moore’s Law has forced a steady, five-decade, exponential increase in computing power. On top of that, the explosion of the IoT has created an even bigger vacuum in the software space. We now have a global hardware infrastructure with the computing, sensing, actuating, and storing capacity to transform life on Earth in wonderful and terrifying ways, and we barely have any idea how to program it correctly.
At the beginning of my career, I worked at a company that developed and sold place-and-route software for ASIC design. We had a goal – to be able to successfully place-and-route gate arrays with 10,000 gates (which was enormous at the time). In order to do that, we had written something like (if memory serves) one million lines of FORTRAN. We had what we considered to be a rigorous software development process – particularly for a company with 50-60 engineers and no computer.
Oops, did I just say we did not own a computer? It’s true. We leased time on a computer that was approximately a thousand miles away. Our engineers would write FORTRAN code on special forms (one character per box, like filling out your tax return). Then, another engineer would “desk check” the code on the forms, manually reviewing it for potential errors. They had to sign a little “desk checked by” line on the coding form, acknowledging that they were now to blame if problems turned up in the code they had reviewed. Then, a keyboard operator manually typed in the new code using a remote terminal with a phone modem. A thousand miles away, the new code was checked into our code base on giant washing-machine-sized disk drives connected to a VAX. Overnight, the new version of the software system would be compiled and run against a bevy of regression tests. The next day, the results and listings from those batch runs were printed out and FedExed back to our offices.
I may be glamorizing that early-1980s development process a bit from my memory. But the truth was, our software worked. It was the best in the industry. It enabled chips to be designed that would have otherwise been impossible. It moved electronics technology forward in a meaningful way.
But, when I say it “worked,” I am being a bit generous. It never, ever worked the first try, or the second. We’d get a new netlist from a new customer and we’d build the model of the base array and models for all of the components used in the customer’s design. We’d then run the whole thing through our system and it would crash. Badly. Every Single Time. Then, we’d pick up the pieces, sort through the printouts, and spend days-to-weeks finding and fixing all the places it broke. Eventually, we’d get that one design to work for that one customer. We cheered. It was a modern engineering miracle, a triumph of our team spirit.
Fast forward ten years and the software engineering universe had changed. I was working in a new company on a team where every single software engineer had their own high-powered engineering workstation. All of those workstations were networked (via a problematic “token ring” network, it turns out, but networked nonetheless). We programmed in Pascal. We had dynamic memory allocation, compilers, interactive debuggers, and more. We were creating software for schematic capture, simulation, and analysis of ASIC and PCB designs, and engineering teams all over the world were depending on our software for their design work.
Bug reports flowed in by the thousands. There were so many bugs, we barely had the capacity to classify, count, and track them all – let alone make any meaningful dent in the backlog by actually fixing them. Software update releases became grueling multi-year projects, bogged down by bug backlogs. There had to be a better way – a way to make more robust, maintainable, upgradable software. We were determined to find it.
Our solution was to become one of the earliest adopters of object-oriented design in our industry. We decided to rewrite our entire code base – over eleven million lines, it turns out – in object-oriented C++. At the time, there was not even a real C++ compiler in existence. Instead, we used a translator called Cfront, which converted our object-oriented C++ code into “mangled” C code, which was then compiled with a normal C compiler. When it came time to debug, however, we were stuck trying to debug the auto-generated C code – with its hundred-character function and variable names, and then somehow mapping the problems we saw there back to the original source code we had written. It was … sub-optimal.
We continued to evolve. A decade later, I was working on high-level synthesis software, and we had sophisticated compilers, debuggers, version control (with check-in, check-out, and merge), configuration management, and robust automated regression testing. We had disciplined vetting and prioritization of bug and enhancement requests. We tracked deliverables back against marketing requirements. We even had the beginnings of what would now be called Agile software development.
We were worlds more sophisticated than the team I had worked on twenty years earlier. And now, when a new customer sent us a new design to run through our system… It crashed. Just about every time.
Today, software development has progressed significantly since the time I was in software engineering. And yet, today, in just about every technology product I buy, software is still the weak link. There will always be problems and bugs and clunky things with a new gadget, and those problems and bugs and clunky things are almost always fixed via a subsequent software patch. To be fair, though, in just about every product I buy, software is also delivering most of the value. The “hardware” portion is often just some kind of generic embedded computing system connected to some collection of sensors, actuators, and human interface devices. All of the behavior that makes it useful comes from software.
The thing is, software is hard – and it just keeps getting harder. Being an expert in software almost always means that you have to be an expert in at least two things – software development, and the discipline in which your application is working. In my case, we had to be expert at both software engineering and chip design. If you’re working on medical software, you have to be expert at both software and medicine. And this is a gross oversimplification. Each of these disciplines has sub-disciplines that are critical to software development – mathematics, signal processing, physics, chemistry, probability – the list goes on and on.
In reality, software engineers are constantly trying to create systems that can outperform their human counterparts. We made place-and-route software because no human engineer could complete the task in any reasonable amount of time. It’s the same with most areas of software development.
As software permeates every aspect of our lives, the need for skilled software engineers grows exponentially. The need for better software development processes, languages, and tools grows as well. There is almost no profession that doesn’t need experts fluent in software development – as well as the other aspects of that profession. Our educational system needs to consider this. In many professional fields – engineering, medicine, and many more – there may well be a need for more programmer-experts than for actual practitioners.
As the proverb goes – give a man a fish, and you feed him for a day. But write him a program that can do it…