I’m feeling more than a little existential at the moment. Unfortunately, I don’t think there’s a cream for that. I was going to say, “Don’t worry, it’s not catching.” However, the more I think about what I’m about to tell you, the more I fear we might discover that it is.
The idea that bumblebees shouldn’t be able to fly dates back to the 1930s. According to an oft-repeated anecdote, a French (it goes without saying) aerodynamicist supposedly calculated that a bumblebee’s wings were too small to support its body weight. It must have been more than a tad awkward that the little scamps continued to fly regardless.
As we now know, the problem wasn’t with the bumblebees—it was with the model he was using. His calculations treated bees like tiny airplanes with rigid wings, but bees don’t fly like airplanes. Although I’m not entirely sure how they learned to do this, bumblebees fly using unsteady aerodynamics (which really is a thing), a flight mechanism that’s very different from the steady airflow model used to describe airplane wings.
I’ve been thinking a lot about life recently. This may be because, later this year, I’ll be celebrating the 23rd anniversary of the 23rd anniversary of my 23rd birthday. It’s not often you get to say that, which is why I’ll be saying it a lot in the weeks to come. Of course, the next problem is defining just what we mean by “life.” People struggle with this because life sits right at the intersection of biology, chemistry, and philosophy, and the boundary between “living” and “non-living” is surprisingly fuzzy.
Most biology textbooks describe life using a set of characteristics, such as metabolism (the use of energy), growth, reproduction, evolution, and responses to stimuli. The trouble is that no single property defines life on its own, and many non-living systems exhibit one or more of them. A few classic, oft-touted examples include crystals (which grow but aren’t alive), fire (which consumes energy and spreads but isn’t considered living), and computers (which respond to stimuli but aren’t organisms).
Some things sit awkwardly on the boundary between living and non-living. Viruses are the classic example because they contain genetic information, evolve, and reproduce, but only inside host cells. Outside a host cell, a virus is essentially an inert package of chemicals. Because of this, scientists still debate whether viruses are alive. Other strange edge cases include prions (infectious proteins), viroids (tiny RNA pathogens), and synthetic minimal cells, each of which pushes the boundary of what we mean by life. A wag once summarized the situation as follows: “We know life when we see it—except when we don’t,” and it’s hard to argue with logic like that.
One argument that viruses aren’t alive is that they can’t generate everything they need to survive on their own. But this is a slippery slope. The concept of “self-sufficiency” isn’t sufficient unto itself (no pun intended) because many unquestionably living organisms depend on others for critical things. Even your humble narrator, who finds himself perched at the pinnacle of human evolution, is unable to self-synthesize things like vitamin C and the nine essential amino acids (lysine, tryptophan, methionine…), which bear the “essential” moniker because we must obtain them from food. This seems a tad unfair when plants and many microorganisms can synthesize them from scratch. At the very least, it makes sense for our bodies to enter into mutually beneficial (symbiotic) relationships with bacteria that provide nutrients and metabolic functions our own bodies cannot perform alone (like the one we have with gut bacteria such as Bacteroides and E. coli, which produce vitamin K, essential for blood clotting).
Things become even more confusing when you realize that life appears to be an emergent behavior; that is, something that arises from a complex collection of chemical processes rather than from any single defining feature and without the need for an intelligent designer. Regarding the latter, evolution produces workable solutions, not perfect designs. For example, the human body doesn’t look like something designed on a clean sheet of paper. It looks more like a system that has been under continuous development for hundreds of millions of years, with “new improved model” features bolted on to old ones.
Proponents of intelligent design often point to the eye as evidence that such a complex structure could not have evolved naturally. Personally, I find the opposite argument rather persuasive. In the human eye, the retina is “wired” in a way many engineers would consider suboptimal because light must pass through layers of neurons and blood vessels before reaching the photoreceptor cells (why would an intelligent designer do things this way?). By contrast, in animals like octopuses and squid, the photoreceptors face the incoming light and the “wiring” runs neatly behind them—an arrangement that looks decidedly more sensible. Bobby Henderson addressed all this much better than I can in his Open Letter to the Kansas School Board (2005), but we digress…
For many moons, my go-to sources for mindboggling insights into life have been the books Wetware: A Computer in Every Living Cell by Dennis Bray and Life’s Ratchet: How Molecular Machines Extract Order from Chaos by Peter M. Hoffmann. This latter tome makes a compelling case that the emergence of living order from lifeless chemistry is not miraculous, but rather it’s what happens when physics, chemistry, and evolution get enough time to play together. I’m also a fan of Imagined Life: A Speculative Scientific Journey among the Exoplanets in Search of Intelligent Aliens, Ice Creatures, and Supergravity Animals by James Trefil and Michael Summers.
Well, I just added a new entry to the canon: How Life Works: A User’s Guide to the New Biology by Philip Ball. On the bright side, this book answers many of the hitherto unanswered questions I had. On the downside, it’s introduced a cornucopia of new conundrums that have left my brain wobbling wildly on its gimbals.

Just to set the scene, DNA was first discovered by Friedrich Miescher in 1869, and its role as the carrier of genetic information was demonstrated by Oswald Avery and his colleagues in 1944. However, the famous double-helix structure of DNA was not determined until 1953 by James Watson and Francis Crick. Even then, it took another decade of research before scientists deciphered the “genetic code,” revealing how sequences of DNA specify the amino acids used to build proteins.
Once it became clear that DNA contained instructions for building proteins, the metaphor of DNA as a “blueprint” or “instruction manual” became popular in science writing and textbooks. The idea was that (a) DNA contains coded instructions, (b) those instructions determine how cells build proteins, and (c) proteins then perform most cellular functions.
But there’s a problem (isn’t there always?). Life (as we know it) is largely based on the generation and interaction of proteins. The human genome contains roughly 20,000 protein-encoding genes, and the proteins encoded by these genes serve as the tiny molecular machines that drive the chemistry of life.
Ever since I first heard this, I visualized these proteins as tiny LEGO bricks. Now, 20,000 is not a small number, but it doesn’t seem sufficient to “build” something as awesome as… well, me. This isn’t helped by the fact that a banana has ~36,500 genes. Now, although I think we can all agree that bananas are an amusingly shaped fruit, knowing that they have close to twice as many genes as we do certainly makes me look at them from a different angle.
And so we return to Philip Ball and his bodacious book. There’s so much I want to talk about here, but there’s far too much for one column, so I think the best way to proceed from this point is to boil a bunch of the Philip-inspired thoughts that are bouncing around my poor old noggin down into a series of fabulous factoids as follows.
Biologists often talk about the “genome,” which is the complete set of DNA in an organism. For humans, the genome contains about 3.2 billion DNA “letters” (nucleotides) arranged into 23 pairs of chromosomes. A useful companion term is the “phenotype,” which refers to the observable characteristics of an organism—everything from eye color and height to metabolism and behavior. Until recently, biologists might have said something like: “The genome contains the instructions, while the phenotype is the result of running those instructions, although the relationship between the two is far from straightforward.” By comparison, if I were to put words into Philip Ball’s mouth, he might say, “Who are you, and why are you putting words into my mouth?” Alternatively, if he were in a less grumpy mood, he might tell us, “The genome contains the instructions, while the phenotype is what happens when those instructions interact with the environment and with each other.”
As we already noted, humans have about 20,000 protein-coding genes, but those genes occupy only about 1% to 2% of the genome. For many years, biologists referred to the rest as “junk DNA,” and this name stuck largely because scientists had no idea what most of it did. Today, we know that much of this non-coding DNA plays important roles in regulating gene activity, controlling when genes turn on and off, and shaping how organisms develop. In other words, much of the genome acts less like a parts list and more like a complicated control system.
Proteins are the molecular machines that perform most of the work inside cells, but genes don’t produce proteins directly. The process involves a step called transcription, by which a segment of DNA is copied into messenger RNA (mRNA); the mRNA is then used as the template for protein synthesis in a second step called translation (i.e., “building” a protein).
DNA stores information using four chemical “letters”: A, C, G, and T. These letters are read in groups of three, called codons. Each codon specifies one amino acid, the building block of proteins. A protein is essentially a long chain of amino acids assembled according to the sequence of codons in a gene.
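Just for giggles, this codon-reading scheme can be sketched in a few lines of Python. The table below contains only a handful of real codon assignments (the full genetic code has 64 entries), so treat this as an illustration rather than a bioinformatics tool:

```python
# Toy sketch: read a DNA sequence three letters at a time and map
# each codon to an amino acid. Only a few of the 64 real codon
# assignments are included here; unknown codons map to "?".
CODON_TABLE = {
    "ATG": "Met",  # methionine (also the "start" codon)
    "TTT": "Phe",  # phenylalanine
    "AAA": "Lys",  # lysine
    "TGG": "Trp",  # tryptophan
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",  # stop signals
}

def translate(dna: str) -> list[str]:
    """Translate a DNA sequence into amino acids, codon by codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino == "STOP":
            break  # a stop codon ends the protein chain
        protein.append(amino)
    return protein

print(translate("ATGTTTAAATGGTAA"))  # ['Met', 'Phe', 'Lys', 'Trp']
```

Real translation happens on ribosomes using transfer RNA, of course, but the lookup-table logic is surprisingly faithful to how the code itself works.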
For a long time, scientists assumed that a single gene produced a single protein in a one-for-one fashion, but we now know that’s not what actually happens. Genes often contain sections called exons (coding segments) and introns (intervening segments that are removed). During transcription, the RNA copy can be spliced in different ways using a process called “alternative splicing.” Different combinations of exons can be joined together to produce different proteins from the same gene. Because of this, our ~20,000 protein-coding genes can produce well over 100,000 distinct proteins (possibly several hundred thousand when chemical modifications are included).
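To see why alternative splicing multiplies the protein count so dramatically, here’s a toy sketch that enumerates the splice variants of a hypothetical four-exon gene. Real splicing is tightly regulated; this simply counts the order-preserving exon combinations that keep the first and last exon:

```python
from itertools import combinations

# Toy sketch of alternative splicing: different combinations of a
# gene's exons can be joined to produce different proteins. Here we
# enumerate every variant that keeps the first and last exon and
# preserves exon order (a simplification of the real rules).
def splice_variants(exons):
    first, *middle, last = exons
    variants = []
    for r in range(len(middle) + 1):
        for kept in combinations(middle, r):  # order-preserving subsets
            variants.append([first, *kept, last])
    return variants

for v in splice_variants(["E1", "E2", "E3", "E4"]):
    print("-".join(v))  # 4 variants, from E1-E4 up to E1-E2-E3-E4
```

Even this crude model shows the combinatorics at work: a gene with a dozen optional middle exons would yield over a thousand variants under these (invented) rules.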
Early models of evolution focused on small mutations, such as changing a single letter in DNA. These do occur, but modern biology suggests that evolution often works by rearranging larger chunks of genetic material. These chunks include exons, regulatory sequences, and entire genes. Mobile pieces of DNA called transposons (sometimes known as “jumping genes”) can move around the genome, reshuffling genetic information. This kind of rearrangement can produce new genetic combinations much faster than single-letter mutations.
Perhaps the most important discovery of modern biology is that biological complexity depends less on the number of genes and more on how those genes are controlled. Genes are turned on and off by regulatory DNA elements such as promoters (where transcription begins), enhancers (regions that increase gene activity), and silencers or repressors (regions that reduce gene activity). These regulatory elements form networks that determine when, where, and how strongly a gene is expressed. In other words, they form a kind of biological control layer sitting on top of the protein-coding genes.
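Continuing the control-system analogy, one can caricature this regulatory layer as a simple function: no promoter activity means no expression at all, while bound enhancers and silencers scale the output up or down. The weights below are purely illustrative—real regulatory networks are nonlinear, combinatorial, and context-dependent:

```python
# Toy caricature of gene regulation: expression requires an active
# promoter, and is nudged up by enhancers and down by silencers.
# The weights (0.5 and 0.4) are invented for illustration only.
def expression(promoter_active, enhancers_bound=0, silencers_bound=0):
    if not promoter_active:
        return 0.0  # no promoter activity, no transcription
    level = 1.0 + 0.5 * enhancers_bound - 0.4 * silencers_bound
    return max(level, 0.0)  # expression can't go negative

print(expression(True, enhancers_bound=2, silencers_bound=1))  # 1.6
print(expression(False, enhancers_bound=5))                    # 0.0
```

The point of the sketch is the architecture, not the arithmetic: the same gene can be silent in one cell and highly expressed in another purely because of which regulatory inputs are present.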
Traditional evolutionary thinking imagined change as a slow accumulation of small genetic mutations. But modern research suggests that some major evolutionary transitions—such as the Cambrian explosion about 540 million years ago—may have been driven by expansions in gene regulatory networks. In particular, the emergence of Hox genes and other developmental regulators allowed organisms to control body plans in far more sophisticated ways. Instead of inventing entirely new proteins, evolution learned how to use existing proteins in new ways.
Perhaps the most remarkable feat of biology is how a complex organism develops from a single fertilized cell. That first cell—the zygote—divides and keeps on dividing: 1 → 2 → 4 → 8 → 16 → … Initially, these cells are nearly identical. Over time, however, they begin to differentiate by activating different sets of genes. A skin cell, a nerve cell, and a muscle cell all contain the same DNA. What makes them different is which genes are turned on and off.
Cells determine their roles by communicating with their neighbors. They exchange information using chemical signals (signaling molecules), mechanical forces, and electrical signals. One common mechanism involves “morphogen gradients,” in which a signaling molecule is released from one region of a developing tissue. Cells detect the concentration of the signal around them and, depending on that concentration, they activate different genes. In this way, cells can determine their position within a developing body.
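This kind of positional signaling is sometimes called the “French flag model,” and it’s easy to sketch: assume a morphogen concentration that decays exponentially with distance from its source, and let each cell pick a fate by comparing the local concentration against two thresholds. All the numbers here are invented for illustration:

```python
import math

# Toy "French flag" sketch of a morphogen gradient: the signal decays
# exponentially with distance from its source, and each cell chooses
# a fate based on the local concentration. The decay rate and the
# thresholds (0.5 and 0.2) are illustrative, not measured biology.
def morphogen(distance, c0=1.0, decay=0.3):
    return c0 * math.exp(-decay * distance)

def cell_fate(concentration):
    if concentration > 0.5:
        return "head"
    elif concentration > 0.2:
        return "trunk"
    return "tail"

fates = [cell_fate(morphogen(d)) for d in range(10)]
print(fates)  # 3 "head" cells, then 3 "trunk" cells, then 4 "tail" cells
```

Notice that no cell is told its position directly; each one infers it from a local chemical measurement—which is exactly the kind of distributed control that makes development so robust.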
Many biological structures emerge naturally from these interacting signals. Just as physical systems tend toward low-energy configurations, biological development often converges on stable patterns. These processes help explain why organisms tend to produce consistent structures such as two arms, two legs, five fingers, and predictable arrangements of bones. The final form of an organism is therefore not explicitly encoded in DNA like a blueprint. Instead, it emerges from networks of interacting genes, proteins, and cells.
The upshot of all this is that DNA does not function like a traditional engineering blueprint. Rather, it behaves more like a program running on a massively parallel biological computer, where the final outcome emerges from countless interacting control loops, with a healthy dose of random “noise” thrown in for good measure, as molecules bump into each other, reactions fluctuate, and genes switch on and off in slightly unpredictable ways.
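For what it’s worth, the “noise” part is easy to caricature in code: picture a gene that flips between ON and OFF with small random probabilities at each time step, with protein synthesized while it’s ON and decaying all the while. Every number below is invented for illustration:

```python
import random

# Toy sketch of "noisy" gene expression: the gene switches between
# ON and OFF states stochastically, so the protein level fluctuates
# instead of settling at a fixed value. All parameters are invented.
def simulate(steps=200, p_on=0.1, p_off=0.05, seed=42):
    rng = random.Random(seed)  # seeded for reproducibility
    on, protein = False, 0.0
    for _ in range(steps):
        if rng.random() < (p_off if on else p_on):
            on = not on  # stochastic switch between states
        # synthesis while ON, plus 10% decay per step (caps level at 10)
        protein += (1.0 if on else 0.0) - 0.1 * protein
    return protein

print(round(simulate(), 2))  # a fluctuating level somewhere between 0 and 10
```

Run it with different seeds and you get different protein levels from identical “genetic” rules—a crude but honest picture of why genetically identical cells can still behave differently.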
As someone who has spent a large part of his career designing and analyzing complex control systems, I find all of this to be both astonishing and disturbing. If I were asked to design a system composed of billions of components interacting through countless feedback loops, riddled with stochastic noise, and operating without anything resembling a central controller, I’d probably declare the project infeasible. And yet biology does exactly that, somehow resulting in wondrous creations like bumblebees, bananas, and people.
All of which leads me to a slightly uncomfortable conclusion: just as the aerodynamicists once proved that bumblebees couldn’t possibly fly, the control engineer in me feels obliged to point out that humans cannot possibly exist. The awkward thing, of course, is that we clearly do.


