“Beware of false knowledge; it is more dangerous than ignorance.” — George Bernard Shaw
Spoiler alert: We’re ignorant and we like it.
Not stupid, necessarily – just uninformed, in the dark, unaware. It’s part of who we are. In fact, we depend on it.
All of cryptography is based on secrecy and ignorance. You and I know the key, but the bad guys don’t. That one secret (or a collection of them) is all that prevents them from taking our stuff. You know the combination to your bike lock but nobody else does. Those are practical kinds of ignorance.
But there’s also a huge cultural value to being uninformed. (Insert political joke here.) Every society in the world expects its members to be largely ignorant of each other. We call it privacy. You don’t know everything your neighbor is doing. You can’t recognize every person on the street or tell the locals from the out-of-towners. The price of beets next month is anybody’s guess. We can’t predict the weather very well. And we’re generally okay with that.
We even amuse ourselves with our own ignorance. Stories, movies, and plays often have surprise endings, and we don’t want to know the surprise until the end. Space aliens may someday wonder why we set ourselves up for this unnecessary anticipation. If we’re so interested in the outcome, why not just jump to the last page of the book? If a friend spoils the movie for us, doesn’t that save us time? Gosh, thanks, I didn’t know that Old Yeller dies at the end. I’ll go watch The Sixth Sense instead.
Same goes for sporting events. Nobody watches the last two minutes of a horse race or a football game just to see who wins. Instead, we watch from the beginning and studiously avoid checking our phones for fear of revealing the outcome before it plays out in front of us in real time. It’s artificial anticipation, and we love it.
But time, progress, and technology are slowly peeling back the frontiers of ignorance, and it’s having a strange effect on society. We’ve been in the dark for so long that we’re not sure we want to know what we don’t know.
The entire insurance industry is based on ignorance. That is, we don’t know who will get sick, or when, or whose fence will collapse, or what it will cost to fix. So, we pool our resources and place a statistical bet, hoping against hope that the money spent on insurance will be wasted. After all, we don’t want to get sick or to have our fences blow over, but, if the worst happens, at least we’ve paid in advance for the repairs.
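The “statistical bet” behind pooling can be sketched in a few lines. This is a deliberately naive illustration with made-up numbers – real underwriting adds overhead, risk loading, and far messier data:

```python
# A minimal sketch of insurance pooling as a statistical bet.
# All figures here are hypothetical, purely for illustration.

def fair_premium(p_loss: float, cost_of_loss: float) -> float:
    """Break-even premium: the expected payout per member,
    ignoring overhead and profit."""
    return p_loss * cost_of_loss

# Suppose 1 fence in 50 blows over in a given year, at $2,000 to rebuild.
premium = fair_premium(p_loss=1 / 50, cost_of_loss=2_000)
print(premium)  # each member pays $40 so the unlucky few are made whole
```

Most members “waste” their $40, which is exactly the outcome everyone is hoping for.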
Economics is affected, too. Backyard barbecues don’t come with a tax to offset the fire department’s increased likelihood of an abrupt visit. Taxes are always applied unfairly because we’ve never been able to assess exactly how much each person uses the public roadways, utilities, armed forces, parklands, and other community benefits. We don’t know those things, so we guess (and grumble about unfair taxes).
But as we learn more about causality, Boole’s inequality, and other arcane aspects of probability and underwriting, we find that we actually can predict certain events with a higher degree of accuracy than heretofore available. If you’re a cigarette smoker, you’re more likely to get lung cancer than a nonsmoker. Certain ZIP codes correlate with higher crime rates. And so on. That changes the game.
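For the curious, Boole’s inequality (the union bound) just says that the chance of *any* of several bad events happening can never exceed the sum of their individual chances. A quick sketch, using hypothetical probabilities and assuming the events are independent for the exact calculation:

```python
# Boole's inequality: P(at least one event) <= sum of individual P(event).
# The three probabilities below are hypothetical risk factors.

p_events = [0.10, 0.05, 0.02]

# Exact probability that at least one occurs, assuming independence:
p_none = 1.0
for p in p_events:
    p_none *= (1 - p)
p_any = 1 - p_none

# The union bound holds regardless of independence:
union_bound = sum(p_events)

print(p_any, union_bound)
assert p_any <= union_bound  # Boole's inequality in action
```

The bound is crude but requires no knowledge of how the risks interact – which is precisely why it’s useful to underwriters working with incomplete information.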
We’re also discovering DNA markers for disease. A blood test can identify a genetic predisposition to certain ailments. Not a guarantee that you’ll get sick next Wednesday, of course, but a statistical deviation from the norm – for whatever peer group you define as the norm.
How does this affect insurance? Medical care? Hiring practices? If your prospective employer knows (or can find out) that you’re more likely to get sick than the other candidate, are they ethically bound to hire you anyway? Is your insurance company allowed to raise your rates? Is the hospital allowed to charge you extra? At what point does “bad luck” become “completely preventable negligence”?
These aren’t hypothetical questions; we’ve already started. Auto insurance is cheaper if you let the company track your car’s motions and tell them where you live, where you work, and where you normally park. (They already know the type of car and its color, because that’s statistically significant public information.) The more data they have, the better they can assess the risks – but is that what we want? Aren’t we happier not knowing? Ignorance is bliss, but it’s exactly what actuaries don’t want. Quite a few people have volunteered for radical surgery based on nothing more than the statistical probability that they might eventually contract a certain ailment later in life.
It’s an old science fiction trope. Would you want to know exactly how long you’re going to live? And how would that knowledge affect your behavior, and thus your lifespan, triggering a kind of Heisenberg Uncertainty Principle around your own existence?
What knowledge is good knowledge, and when is ignorance preferred? Would it be fair for a professional sports team to test young recruits’ DNA to predict performance, or should we just let the games play out on their own? More subtly, what if we knew exactly what diet and exercise program would produce the absolute best health results? Would you follow that regimen, and how guilty would you feel deviating from it? Your Fitbit or Apple Watch could track any nonconformity from ideal behavior. Cheeseburgers aren’t health food, but it’d be weird knowing that each one took 4.7 minutes off your life.
In the world of AI development, we’re creating logic machines that behave in delightfully illogical ways – because we want them to. Google’s Duplex demo had very human-like faults in its speech, on purpose. Sort of. Computer-generated music, for the most part, sounds computer generated. But some of it’s pretty good. Translating human languages has always been tricky, but it’s getting better by getting more unpredictable. Second-hand stories about translation algorithms using AI suggest that even the AI developers don’t quite know how the systems work. They read French input (for example) and produce Russian output, but nobody’s quite sure what happens in between. Is this a problem, or is it the whole point?
Personally, I’ve made a career out of being the dumbest guy in the room. Surround yourself with smart people and you’ll inevitably learn a lot of interesting things. Occasionally I can parrot it back. Stupid is as stupid does.
Like yin and yang, ignorance is the flip side of wisdom; you can’t have one without the other. I’m not sure that knowledge is a zero-sum game, though. As we push back the boundaries of ignorance, we’re not necessarily any smarter or wiser. We just learn more about what we don’t know. Knowledge appears to be fractal: there’s always more to explore. But it’s hard to map out a fractal landscape with any accuracy, and it’s hard to know where our new knowledge – and new ignorance – will take us.