I once had a colleague who defined a “system” as “the thing one level of complexity higher than you can understand.”
I always thought there was a bit of profound insight in that definition.
We humans have a finite capacity for understanding, and that capacity does not benefit from the bounty of Moore’s Law. Our brains don’t magically think twice as fast every two years, or have the capacity to consider exponentially more factors each decade. Our limited brains also don’t compensate well for the layer upon layer of encapsulated engineering accomplishments of our predecessors. Even comparatively simple examples of modern technology can be deceptively complex – perhaps more complex than a single human brain can fully comprehend.
The historical engineering defenses against our limited human mental capacity involve two key mechanisms – collaboration and encapsulation. If we are designing some kind of electronic gadget and we can’t understand both the digital and the analog portions of our design, for example, we might take the digital side ourselves – and team up with an analog expert who can focus on the analog part. Somebody else takes over the mechanical design for the enclosure and stuff, and we may even reluctantly acknowledge that we’ll need other experts for things like manufacturing, testing, and (shudder) marketing and sales. Even though nobody on the team understands all the details of the whole product, as a group we are able to conceive, create, manufacture, distribute, market, sell, and support the thing. That’s the beauty and power of collaboration.
We also rely heavily on encapsulation. Somewhere, back in the old days, somebody figured out how to make logic gates out of MOSFETs. Somebody else figured out how to make processors, memory, and peripherals out of those gates. All those old folks quite helpfully encapsulated all that knowledge into handy microcontroller chips that we can simply drop into our design – even if we don’t have the foggiest clue how MOSFETs work. In practice, just about every design we create is somehow a hierarchical pyramid of encapsulated technology – most of which we don’t understand. We move ahead by riding the upper edge of these layers of abstraction, assembling the currently-encapsulated components into higher-level systems – which may themselves be encapsulated as building blocks of future projects. We build systems out of components we mostly don’t understand using tools we mostly don’t understand. Yet, somehow, we are able to understand the result.
These two coping mechanisms – encapsulation and collaboration – are imperfect at best. Frequently we see breakdowns where projects go astray because team members don’t share the same top-level vision for the thing they’re working together to create. Or, we see cases where the encapsulation of underlying technology is less than perfect, and systems fail at a higher level because of lower-level interactions between “encapsulated” technologies.
The idealistic side of our brains wishes that this weren’t true. It is easy to envision a world where every part of our system is designed from the ground up specifically for our purposes and to our specifications. In safety-critical systems, we often see a noble effort in that direction. In reality, however, with the complexity of modern technology, such a scenario is becoming increasingly unrealistic. It is practically impossible to traverse the entire scope of a complex system and determine that each part properly serves the goals of the whole. A high-level software application may use a low-level driver that was compiled with an open-source compiler. That version of the compiler may have targeted an older version of a processor architecture where the bit-order of a word was later changed. That whole application may be part of a small subsystem which itself is activated only in the rare case where… You get the picture. Complex systems created by humans are, by their nature, incomprehensible to humans.
This fallacy that we can somehow completely understand the systems we design has been an underlying assumption in the creation of our engineering design flow itself. The traditional “waterfall” development process assumes that we can completely understand our system before we build it. It asks us to create a detailed specification, a project plan that will let us design something that meets that specification, and a testing plan that will allow us to evaluate whether or not we have succeeded. It is a methodology based on a belief that we can intelligently design a complex system in a vacuum, and then have that system perform well in the real world. With the reality of today’s complex hardware/software/connected systems, it is also a load of rubbish.
I’m going to point out the unpopular (in this forum) truth that software is much more complex than hardware. Because of this truth, software engineers hit the complexity wall earlier, realizing that the “waterfall” development concept was inadequate and unworkable for modern systems. The entire “agile software” movement is a reflection of that realization – that the only practical way to make robust software systems is to allow them to evolve in the real world, adjusting and adapting to problems as they are encountered. It is not enough to simply apply a bunch of really intelligent and extremely conscientious software developers to a problem. Whatever thing they design in a clean room will not work correctly. Their inability to predict all of the situations that may arise in real-world use and to visualize how their system should respond to those events will inevitably lead to problems.
There are those who argue that hardware has now hit that same complexity wall, and that the only workable solution is something like agile development for hardware as well. Whether you believe that or not, it should be obvious that our systems (of which software is an increasingly large component) have now crossed that line. Furthermore, now that we have systems that are parts of vast, interconnected, super-systems (like the IoT), it should be doubly clear that our technology is now fully in an evolutionary model, and the days of sending a team of bright engineers away to a nice quiet space to create something new on a blank sheet of paper (sorry, I meant “on a nice, responsive tablet”) are over.
Today, our systems are truly at a level of complexity higher than anyone can actually fully understand. Because of that, they will increasingly “evolve” rather than being “designed”. Perhaps we will reach a point where system design is more akin to breeding livestock – choosing subsystems and components that seem to behave the way we want, connecting them together, and watching to see what happens.
In any case, the deterministic, control-oriented version of engineering we all learned in school will necessarily change to something else. Don’t panic, though. If engineering education prepared us for anything, it is the reality of constant change. That’s what keeps us employed. But, beyond employment, that’s what makes engineering a constant source of inspiration and excitement.