In dystopian science fiction, we are taught to fear the technological singularity – the time when artificial superintelligence advances to a point far beyond human intelligence, with a result that profoundly alters human existence. Vivid imagery of automated weapons of doom working to wipe out human civilization and take over the world – or the galaxy – has terrorized generations of sci-fi fans. Stephen Hawking, Elon Musk, and Bill Gates have warned of its approach. Ray Kurzweil says it will be upon us by 2045, and, as far back as 1942, Isaac Asimov was contriving rules for robots, essentially relegating them to the role of slaves to humans, to protect humanity from harm by machines more capable than themselves.
But what if the seeds of the technological singularity are rooted in economics rather than in artificial intelligence? What if the rules of the game we have constructed for ourselves cause the machines to quietly win the war against us meager humans without ever firing a shot?
Evolution has caused humans to develop an inherent desire for self-preservation. Natural selection rewards greed, conquest, and self-defense, and so our species has evolved with hard-wired values and traits that express those priorities. But intelligent machines are not the product of natural selection. The question may not be whether robots could “take over the world,” but rather why they’d even want to. Without the oh-so-human qualities of ego and fear, the calculus changes dramatically. We fear super-intelligent machines pursuing conquest because we project our own motivations onto them, but there is no reason to expect super-AI to share our vices.
We have already done isolated experiments that shed interesting light on the potential of the singularity and its impact. For well over a decade, computers have been able to beat the most accomplished human players at chess. But the machines didn’t just walk away with all the professional chess money. Humans simply made an additional rule that says machines aren’t allowed to compete. Problem solved. Of course, the rules for thriving on planet Earth are substantially more complicated than those for chess. But, isn’t complexity where AI thrives? And, is complexity the real issue, or is it motivation?
Capitalism certainly shares its underlying mechanism with evolution. It has long been understood that competition improves the breed, and our economic rules are designed to take advantage of that effect. People and businesses that operate more efficiently make more profit, and thus have more resources to invest to become even more competitive. Those who build the best mousetrap for the least money survive, while those who can’t compete fade away. Over time, this system has shown that teams generally perform better than individuals, and corporations have come to dominate the economic game.
Of course, the rules of capitalism, as we have defined them anyway, take no account of people. The rules of fiduciary duty specifically charter boards of directors to look exclusively after the interests of the owners or shareholders over those of the employees, the customers, the planet – even the corporation itself. In the contrived game of corporate evolution, even self-preservation takes a back seat to shareholder profit.
Let’s take a quick trip to a pathological case – in hypothetical terms. Let’s say that technology keeps gradually replacing humans in the workforce. Eventually, what if technology existed that allowed entire companies to operate autonomously – completely without human intervention? After all, we are seeing steady movement in that direction as we engineers develop technology that surpasses human abilities one job at a time. Is the logical projection of that trend the company that has no human employees at all, only owners?
But, our giant robotic corporations of the future will still need humans for at least one role: customers. If our imaginary Fortune 500 automated AI monolith dominates the flat-screen TV market, there will still need to be folks who want (and can afford) flat-screen TVs. But, if technology has eliminated all the jobs, it isn’t clear where most folks will come up with the cash to buy them.
On the way to this silly future, we may pass through some interesting transitional states. The trend toward increased automation should continue to deliver goods and services to humans at decreasing cost, with increasing quality. Flat-screen TVs would continue to get better, cheaper, and more environmentally friendly, and that trend will compete with decreasing employment and earnings to stabilize the market. Which trend will win? And, it’s important to remember that technology can’t solve everything. Even with total automation, we need energy and other natural resources to produce products and services.
As long as there is no artificial incentive to pay humans for a job that machines can do better, the machines will always eventually win. And, at least today, the incentives are the opposite. For example, in the US, tax rates are much higher for money you earn by working, versus money earned by your assets. The result is that businesses have extra incentive to take advantage of opportunities to replace humans with technology.
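The structure of that incentive is easy to sketch in a few lines. The tax rates below are purely illustrative assumptions, not actual statutes anywhere; the point is only that when labor income is taxed more heavily than asset income, the pre-tax cost of delivering the same after-tax dollar is higher for a human worker than for capital:

```python
# Illustrative sketch only: WAGE_TAX and CAPITAL_TAX are assumed
# rates chosen for the example, not real tax law.

WAGE_TAX = 0.30      # assumed marginal rate on earned (labor) income
CAPITAL_TAX = 0.15   # assumed rate on income earned by assets

def gross_needed(net_target: float, tax_rate: float) -> float:
    """Pre-tax income required to leave `net_target` after tax."""
    return net_target / (1.0 - tax_rate)

net = 50_000.0  # the same after-tax purchasing power either way
via_wages = gross_needed(net, WAGE_TAX)
via_assets = gross_needed(net, CAPITAL_TAX)

print(f"Gross needed via wages:  ${via_wages:,.0f}")
print(f"Gross needed via assets: ${via_assets:,.0f}")
print(f"Premium on human labor:  {via_wages / via_assets - 1:.1%}")
```

Under these assumed rates, routing the same net dollar through wages costs roughly 21% more pre-tax than routing it through assets – which is the thumb on the scale the paragraph above describes.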
Of course, we engineers have been replacing ourselves for decades. If you’re a logic designer, how long has it been since you manually worked through De Morgan equivalents to reduce gate counts? We have been on a steady trajectory of automating engineering tasks, and, in doing so, we constantly raise the level of abstraction in our work. Of course, a time may arrive when we’ve engineered our way to the top of the abstraction hierarchy. As the task of implementing our solutions has grown easier, our work has moved us steadily away from the details and closer and closer to the problem itself. As our EDA tools, dev boards, hardware and software IP, and other engineering tools grow more powerful, engineering may become a task of simply and clearly identifying a need, and cleverly specifying the constraints of the solution.
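For readers who haven’t pushed gates around by hand lately, De Morgan’s laws are the textbook identities in question: NOT (a AND b) equals (NOT a) OR (NOT b), and NOT (a OR b) equals (NOT a) AND (NOT b). A few lines of Python can exhaustively verify them over the full truth table – exactly the kind of mechanical check that logic synthesis tools absorbed from human designers long ago:

```python
from itertools import product

# De Morgan's laws let a logic designer trade gate types, e.g.
# replacing an AND + inverter with a pair of inverters + OR:
#   NOT (a AND b) == (NOT a) OR  (NOT b)
#   NOT (a OR  b) == (NOT a) AND (NOT b)

def equivalent(f, g, n_inputs=2):
    """Check two Boolean functions agree on every input combination."""
    return all(f(*bits) == g(*bits)
               for bits in product([False, True], repeat=n_inputs))

nand_form = lambda a, b: not (a and b)
demorgan1 = lambda a, b: (not a) or (not b)

nor_form = lambda a, b: not (a or b)
demorgan2 = lambda a, b: (not a) and (not b)

assert equivalent(nand_form, demorgan1)
assert equivalent(nor_form, demorgan2)
print("Both De Morgan identities hold for all inputs")
```

The exhaustive check is trivial for two inputs – four rows of a truth table – which is precisely why this task was among the first to be automated away.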
Something has to give, because if we follow all these trendlines, work itself will no longer have value – only ownership. And, ironically, if only ownership has value, we will no longer need owners either. It will be interesting to watch.