
Combatting Complexity

From Intelligent Design to Evolution in Engineering

I once had a colleague who defined a “system” as “the thing one level of complexity higher than you can understand.”

I always thought there was a bit of profound insight in that definition.

We humans have a finite capacity for understanding, and that capacity does not benefit from the bounty of Moore’s Law. Our brains don’t magically think twice as fast every two years, or have the capacity to consider exponentially more factors each decade. Our limited brains also don’t compensate well for the layer upon layer of encapsulated engineering accomplishments of our predecessors. Even comparatively simple examples of modern technology can be deceptively complex – perhaps more complex than a single human brain can fully comprehend. 

The historical engineering defenses against our limited human mental capacity involve two key mechanisms – collaboration and encapsulation. If we are designing some kind of electronic gadget and we can’t understand both the digital and the analog portions of our design, for example, we might take the digital side ourselves – and team up with an analog expert who can focus on the analog part. Somebody else takes over the mechanical design for the enclosure and stuff, and we may even reluctantly acknowledge that we’ll need other experts for things like manufacturing, testing, and (shudder) marketing and sales. Even though nobody on the team understands all the details of the whole product, as a group we are able to conceive, create, manufacture, distribute, market, sell, and support the thing. That’s the beauty and power of collaboration.

We also rely heavily on encapsulation. Somewhere, back in the old days, somebody figured out how to make logic gates out of MOSFETs. Somebody else figured out how to make processors, memory, and peripherals out of those gates. All those old folks quite helpfully encapsulated all that knowledge into handy microcontroller chips that we can simply drop into our design – even if we don’t have the foggiest clue how MOSFETs work. In practice, just about every design we create is somehow a hierarchical pyramid of encapsulated technology – most of which we don’t understand. We move ahead by riding the upper edge of these layers of abstraction, assembling the currently-encapsulated components into higher-level systems – which may themselves be encapsulated as building blocks of future projects. We build systems out of components we mostly don’t understand using tools we mostly don’t understand. Yet, somehow, we are able to understand the result. 
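
Just to make that layering concrete, here is a deliberately toy sketch in Python (hypothetical names throughout, standing in for no real part or library) in which the product-level code talks only to the top layer of the pyramid, and each layer hides the one beneath it:

class Mosfet:
    """The bottom of the pyramid: most of us never look inside."""
    def switch(self, gate_voltage: float) -> bool:
        return gate_voltage > 0.7  # crude threshold model, nothing more

class NandGate:
    """Built from MOSFETs; users of the gate never see them."""
    def __init__(self) -> None:
        self._fets = [Mosfet(), Mosfet()]
    def output(self, a: bool, b: bool) -> bool:
        return not (a and b)

class Microcontroller:
    """Built (conceptually) from gates; exposes only pins to the next layer up."""
    def __init__(self) -> None:
        self._logic = [NandGate() for _ in range(1000)]  # stand-in for the real logic
        self.pins: dict[str, bool] = {}
    def set_pin(self, name: str, value: bool) -> None:
        self.pins[name] = value

# The product-level design lives entirely at the top of the pyramid:
def blink_status_led(mcu: Microcontroller) -> None:
    mcu.set_pin("LED0", True)  # no MOSFET knowledge required up here

if __name__ == "__main__":
    blink_status_led(Microcontroller())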

These two coping mechanisms – encapsulation and collaboration – are imperfect at best. Frequently we see breakdowns where projects go astray because team members don’t share the same top-level vision for the thing they’re working together to create. Or, we see cases where the encapsulation of underlying technology is less than perfect, and systems fail at a higher level because of lower-level interactions between “encapsulated” technologies.

The idealistic side of our brains wishes that this weren’t true. It is easy to envision a world where every part of our system is designed from the ground up specifically for our purposes and to our specifications. In safety-critical systems, we often see a noble effort in that direction. In reality, however, with the complexity of modern technology, such a scenario is becoming increasingly unrealistic. It is practically impossible to traverse the entire scope of a complex system and determine that each part properly serves the goals of the whole. A high-level software application may use a low-level driver that was compiled with an open-source compiler. That version of the compiler may have targeted an older version of a processor architecture where the bit-order of a word was later changed. That whole application may be part of a small subsystem which itself is activated only in the rare case where… You get the picture. Complex systems created by humans are, by their nature, incomprehensible to humans.

This fallacy that we can somehow completely understand the systems we design has been an underlying assumption in the creation of our engineering design flow itself. The traditional “waterfall” development process assumes that we can completely understand our system before we build it. It asks us to create a detailed specification, a project plan that will let us design something that meets that specification, and a testing plan that will allow us to evaluate whether or not we have succeeded. It is a methodology based on a belief that we can intelligently design a complex system in a vacuum, and then have that system perform well in the real world. With the reality of today’s complex hardware/software/connected systems, it is also a load of rubbish.

I’m going to point out the unpopular (in this forum) truth that software is much more complex than hardware. Because of this truth, software engineers hit the complexity wall earlier, realizing that the “waterfall” development concept was inadequate and unworkable for modern systems. The entire “agile software” movement is a reflection of that realization – that the only practical way to make robust software systems is to allow them to evolve in the real world, adjusting and adapting to problems as they are encountered. It is not enough to simply apply a bunch of really intelligent and extremely conscientious software developers to a problem. Whatever thing they design in a clean room will not work correctly. Their inability to predict all of the situations that may arise in real-world use and to visualize how their system should respond to those events will inevitably lead to problems.

There are those who argue that hardware has now hit that same complexity wall, and that the only workable solution is something like agile development for hardware as well. Whether you believe that or not, it should be obvious that our systems (of which software is an increasingly large component) have now crossed that line. Furthermore, now that we have systems that are parts of vast, interconnected, super-systems (like the IoT), it should be doubly clear that our technology is now fully in an evolutionary model, and the days of sending a team of bright engineers away to a nice quiet space to create something new on a blank sheet of paper (sorry, I meant “on a nice, responsive tablet”) are over.

Today, our systems are truly at a level of complexity higher than anyone can actually fully understand. Because of that, they will increasingly “evolve” rather than being “designed”. Perhaps, we will reach a point where system design is more akin to breeding livestock – choosing subsystems and components that seem to behave the way we want, connecting them together, and watching to see what happens.
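
For the flavor of what that “breeding” might look like, here is a toy selection loop in Python (entirely hypothetical components and a made-up fitness score): assemble candidate combinations of off-the-shelf parts, watch how they behave, keep the ones that behave well, and recombine the survivors.

import random

# Entirely hypothetical component choices for some gadget; each knob is a part
# we pick off the shelf rather than design from scratch.
CHOICES = {
    "radio":   ["ble", "wifi", "lora"],
    "mcu":     ["m0", "m4", "m7"],
    "battery": ["coin", "aa", "lipo"],
}

def observed_fitness(candidate: dict) -> float:
    """Stand-in for 'watching what happens' in the field; here, a made-up score."""
    score = {"ble": 3, "lora": 2, "wifi": 1}[candidate["radio"]]
    score += {"m0": 1, "m4": 2, "m7": 1}[candidate["mcu"]]
    score += {"coin": 1, "aa": 2, "lipo": 3}[candidate["battery"]]
    return score + random.uniform(0.0, 0.5)  # real-world noise

def breed(parent: dict) -> dict:
    """Keep most of what worked, swap one component at random."""
    child = dict(parent)
    knob = random.choice(list(CHOICES))
    child[knob] = random.choice(CHOICES[knob])
    return child

if __name__ == "__main__":
    herd = [{k: random.choice(v) for k, v in CHOICES.items()} for _ in range(8)]
    for generation in range(10):
        herd.sort(key=observed_fitness, reverse=True)
        survivors = herd[:4]                                   # keep what behaves well
        herd = survivors + [breed(random.choice(survivors)) for _ in range(4)]
    print("best configuration found:", max(herd, key=observed_fitness))

Nothing in that loop understands why a given combination works; it only observes that it does – which is rather the point.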

In any case, the deterministic, control-oriented version of engineering we all learned in school will necessarily change to something else. Don’t panic, though. If engineering education prepared us for anything, it is the reality of constant change. That’s what keeps us employed. But, beyond employment, that’s what makes engineering a constant source of inspiration and excitement.

3 thoughts on “Combatting Complexity”

  1. Even though Moore’s Law makes our systems twice as complex every two years, we still use the same old brains to design them. At some point, that means our systems are more complex than we can understand. (And, for some of us, that happens sooner than for others…)

    What happens then?

  2. In a war we have Generals, with a chain of command, and significant specialization, all the way down to foot soldiers and cooks. None of them are expected to know the job, and do it well, from top to bottom.

    It’s foolish to expect members of engineering teams to have the same grand breadth.

    We just need to accept that the “architect” has the vision to direct the product evolution in a responsible way that will meet the market goals. With that, other people can build teams to execute the implementation of each subsystem they are tasked with, using specialists to deal with the nail-biting complexity of each narrow part of the problem.

  3. I’ll provisionally agree that software is more complex than hardware, and has hit the “complexity wall” sooner.
    This, however, has more to do with economic realities related to software production than any real necessity for software to be that complex.
    WordStar was distributed for years on a 140K floppy. Only slightly more functionality appears on 6 CDs or 2 DVDs today.
    Only with the advent of many-core and multicore CPU architectures has there been a real reason for greater complexity. The “code bloat” created by a software industry that is responsive to perceived market demand is the real reason for software complexity.
    “Push it out the door and let the next generation of hardware solve the speed issues of a slow piece of software” has been the battle cry for a few decades.
    And the experts who studied massive parallelization strategies immediately pronounced the same design goals that were required to get work done when machine resources were the constraining factors.
    Back to the basics.
    Go figure.
