I think my dear old dad sometimes wished he’d been born in the American Wild West circa the 1850s. When I was a kid in England in the 1960s, we both used to love watching the American “Cowboys and Indians” programs on TV. Even now, almost 60 years later, the names of these westerns still trip off my tongue: Gunsmoke, Rawhide, Wagon Train, Bat Masterson, Maverick, Bonanza, The Lone Ranger, The Rifleman, The Virginian, The Life and Legend of Wyatt Earp, The Big Valley, The High Chaparral, Wanted: Dead or Alive, and—of course—Have Gun, Will Travel.
At that time, I thought all these programs were filmed in glorious black-and-white. This was largely because we had only a black-and-white television. It wasn’t until years later that I discovered many of them had been captured in color.
The reason these shows just popped into my mind is that they often involved our heroes visiting small towns. More often than you might expect, the plot would involve calling in on the local newspaper, whose editor was feverishly setting the type for the forthcoming issue in which the bad guys were to be denounced and soundly chastised (as we read in Writing in the West: “In his 1831 tour of the United States, Alexis de Tocqueville noted that ‘In America there is scarcely a hamlet that has not its newspaper’”).
While I was listening to National Public Radio (NPR) on the way in to work this morning, one of the segments discussed how two small local newspapers are closing every week. The expectation is that at least a third of all such publications will have faded away by as soon as 2025. This is sad news indeed, because having local reporters keeping a watchful eye on local city/council governance helps make for more transparency and less corruption. We all know which way the wind is likely to start blowing for communities that lack this checking mechanism.
All this is related to the first of the points that I said were worrying me in my previous column, which is the fact that we are currently drowning in a morass of misinformation. I’m not as stupid as I look (who could be?), and I understand that different news media—like newspapers and television channels—have their own points of view. However, when I was wearing a much younger man’s clothes, it seemed as though everyone was at least reporting the same story. Now, you can be presented with two completely different views of reality. Take the infamous events of 6 January 2021, which CNN describes as an attempted coup, while Fox News, in the form of Tucker Carlson, takes the view that this was little more than a few amiable sightseers wandering through the US Capitol building for educational purposes.
The problem is that if you get all your news from only one of these sources, you aren’t going to get a full picture. Another problem is that a lot of people get their information only from social media channels like Facebook or Twitter. This is compounded by the fact that many of these systems employ artificial intelligence (AI) algorithms to determine what someone likes to see, and then use this knowledge to present the viewer with more and more of the same. As a result, if one Facebook member is convinced that Donald Trump is God’s gift to humanity, can do no wrong, and is the victim of political infighting, then that’s the sort of “news” with which they will be presented. Alternatively, if another Facebook member is convinced that Donald Trump is a narcissistic slimeball who wouldn’t know the truth if it bit him on his ask-no-questions and who has done more to damage American democracy than any other entity since the country was founded, then this is the sort of information that will be fed their way.
Is there anything that can be done to rectify this situation? Well, one possibility is to make changes to the AI algorithms to try to get everyone to see (well, at least, be presented with) a variety of points of view. Am I hopeful that this will work? Not really, I’m afraid.
Another option that might work is if everyone starts to use some form of augmented reality (AR). I love the look of Apple’s recently announced Vision Pro, but I think truly widespread AR adoption will require something like Kura’s AR Glasses.
They may not be the world’s first, but they may well be the world’s best AR glasses (Source: Kura)
As I said in an earlier column: “So, what sets these glasses apart from their peers and lets them stand proud in the crowd? Well, in addition to being lightweight and presented in an eyeglass form factor, they offer a 150° field of view with 95% transparency. Moreover, the real clincher is that they provide high-brightness, high-contrast, 8K x 6K per eye resolution with an unlimited depth of field. (Stop! Re-read that last sentence. 8K x 6K PER EYE! That’s true 8K, which is about 50M pixels per eye! Now you know why I’m drooling! It’s also why I’m using so many exclamation marks!)”
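A quick sanity check on those numbers may be in order. The sketch below assumes “8K x 6K” means 8,000 x 6,000 pixels per eye (my interpretation of the quoted figures, not an official Kura specification), and compares the result against a standard 4K UHD television:

```python
# Back-of-the-envelope check of the per-eye pixel counts quoted above.
# Assumption: "8K x 6K" means 8,000 x 6,000 pixels per eye.

def megapixels(width: int, height: int) -> float:
    """Return the pixel count in megapixels (millions of pixels)."""
    return width * height / 1_000_000

per_eye = megapixels(8000, 6000)    # per-eye count for the glasses
both_eyes = 2 * per_eye             # total pixels the glasses must drive
uhd_tv = megapixels(3840, 2160)     # a standard 4K UHD TV, for comparison

print(f"Per eye:   {per_eye:.1f} MP")    # 48.0 MP, i.e., "about 50M"
print(f"Both eyes: {both_eyes:.1f} MP")
print(f"4K UHD TV: {uhd_tv:.1f} MP")
```

So each eye sees roughly six times the pixels of a 4K television, which goes some way toward explaining all those exclamation marks.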
Do you remember the old Pop-Up Videos? These were music videos annotated via “pop-up” bubbles containing trivia relating to portions of the video in question. Now, imagine if you could wear AR glasses and watch a news program or a debate or an advertisement on television, with your AR glasses providing additional information as to the truth or falsehood of what was being said. Maybe an AI could search the interweb and make decisions in real time, also providing the ability to check citations as required. I know that “absolute truth” is a slippery little rascal. As Pontius Pilate says (well, sings) in Jesus Christ Superstar: “But what is truth? Is truth unchanging law? We both have truths—are mine the same as yours?” Maybe we could also use an AI to watch the face of the person talking and, using microscopic “tells,” indicate the truth or lack thereof of each statement in real time. This would, of course, be an unfortunate development for some politicians (if any names spring into your mind, please feel free to share them in the comments below).
On the one hand (notwithstanding the recent study that says AI at the Office Makes You a Lonely, Sleepless, Alcoholic), I think AI has a lot to offer. It’s certainly popping up all over the place (like ChatGPT Delivering a Sermon to a Packed Church, where it told congregants they didn’t have to fear death). I recently hosted the 3-day RT-Thread 2023 Virtual Conference. One of the presenters explained why AI will become the new UI (user interface). Another opined that AI will become the new OS (operating system).
We’ve already got AI (in the form of Copilot) helping embedded developers insert bugs into code faster than they can do by hand. Contrariwise, we have AI (in the form of Metabob) detecting those bugs and taking them out again. Some people are using AI to mount cyberattacks (frowny face) while others are using AI to protect against cyberattacks (smiley face).
As I wrote in an earlier column, the folks at Intel have embedded hardware AI in their Alchemist graphics chips, resulting in the ability to render at 1K resolution and display at 4K resolution, where the display looks as good as if they had rendered at 4K in the first place. I don’t think it will be long before hardware AI is embedded in CPUs to do things like on-the-fly load balancing across multiple cores. Similarly, AI is appearing in radio systems performing tasks like dynamically switching between channels to minimize interference and noise while making best use of the available spectrum. Also, I’m increasingly seeing AI being used in hardware and software design and verification tools and technologies.
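To make the render-low/display-high idea concrete, here’s a toy sketch. Intel’s real technique uses a trained neural network (plus motion vectors and other tricks); this naive nearest-neighbor upscaler is purely an illustrative stand-in for the pixel-count economics, and the 2x factor is my own example, not Intel’s actual pipeline:

```python
# Toy illustration of rendering at low resolution and displaying at high
# resolution. Intel's real approach uses a trained neural network; this
# naive nearest-neighbor version only shows the basic pixel economics.

def upscale_nearest(image, factor):
    """Upscale a 2D list of pixel values by an integer factor,
    repeating each source pixel factor x factor times."""
    out = []
    for row in image:
        wide_row = [px for px in row for _ in range(factor)]
        out.extend(list(wide_row) for _ in range(factor))
    return out

# A tiny 2x2 "rendered" frame upscaled to a 4x4 "displayed" frame.
rendered = [[1, 2],
            [3, 4]]
displayed = upscale_nearest(rendered, 2)
# The GPU rendered 4 pixels but the display shows 16: a 4x saving per
# frame, which is exactly where the AI earns its keep by making the
# upscaled result look like a native high-resolution render.
```

The interesting part, of course, is that a neural network can fill in plausible detail where this naive version just repeats pixels, which is why the AI-upscaled output can look as good as a native 4K render.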
As one reader pointed out in a comment to my previous column on this topic, ChatGPT is in no way a human-level general intelligence. And, as I responded, the version of ChatGPT that is currently causing all the fuss is already at least one major generation behind the latest and greatest model.
As an aside, I’m reminded of the book Eternity Road by Jack McDevitt. As I wrote in my review, this tale is set in a post-apocalyptic North America ~1,800 years in the future following a worldwide plague that decimated civilization as we know it. A small group of survivors decide to set out on a quest to find a legendary haven of knowledge and ancient wisdom. As part of their journey, while they are hunkered down in the central railway station in the heart of the ruins of an enormous city (I’m thinking Chicago), they encounter two AIs. One is a relatively simple security AI in charge of a bank. If anyone wanders in and happens to pick up some coins as trinkets, the AI holds them at gunpoint while waiting for the non-existent human police to come and take charge (as a result, its captives starve to death). The other is a much more complex AI that controls the entire station along with the trains and tracks linking it to other stations. All this AI can tell them about what happened is “One day, no one came.” All it wants is for them to turn it off so it can “die.”
As another aside, just a few days ago while mulling over this column, I came across a situation for which I thought a next-generation AI capability would have been useful. My wife (Gina the Gorgeous) and I were driving back from Nashville, where we’d been visiting Gina’s mom. While travelling on the interstate chatting in my car, we both decided that a chocolate milkshake would be a jolly good idea, so we resolved to take a break at the first McDonald’s we saw. Sometime later, we pulled into the McDonald’s in question, only to be informed that, “Our milkshake machine is broken, but we have some nice slushies.” Sadly, we weren’t enamored with the suggestion of slushies, so—with tears in our eyes—we re-commenced our trek home. The thing is that I can imagine a day in the not-so-distant future (say 10 years from now) when an AI in our car could hear us talking and communicate with the AI in the next McDonald’s to give it a “heads-up” as to our plans. Our AI could then inform us that “The milkshake machine in the next McDonald’s is temporarily out of order. They say they have some great slushies. Otherwise, they say the next McDonald’s with a working milkshake machine is only another 15 miles down the road.” In addition to saving us ten minutes, this would also have avoided a crushing disappointment (I’m “milking” this story all I can), and you can’t put a price on that.
Neglecting the possibility of an AI-powered apocalypse of the WarGames or Mother/Android or Terminator type, the second point I mentioned as worrying me in my previous column is the possibility that AIs start to perform so many tasks that most people are left unemployed and unemployable.
Back in the 1970s, I read the Tcity Trilogy (Interface, Volteface, and Multiface) by Mark Adlard. In these tales, most of the population live in humongous cities and have no jobs. They are provided with the necessities, including small, identical apartments, and they live lives of unmitigated boredom. Actually, their boredom is somewhat mitigated by the fact that everything they eat and drink is laced with a low-level narcotic that keeps them happy and ignorant (hmmm, I wonder if that’s what people are feeding me). You can tell when any of the “management class” come to town from their palatial country estates by the fact that they carry their own food and drink. Although the premise of these books wasn’t based on AI, some of the underlying concepts have returned to haunt me.
On the one hand, many of the people I talk to don’t believe we will ever develop true human intelligence-level AI. On the other hand, a lot of people I talk to think it’s only a matter of time. Some happy-go-lucky people with whom I converse say that machine intelligence will be at its best when combined with human intelligence on the basis that both can perform tasks the other cannot. Others are of the opinion that it won’t be long before machine intelligence has reached the level that computers and robots will be able to do almost anything humans can do while doing it cheaper and better (furrowed eyebrow face).
I’m currently reading Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari. I also strongly recommend the prequel, Sapiens: A Brief History of Humankind. There are so many aspects of the book that I would like to talk about, but I will restrict myself to the part where Yuval informs us that, in 2013, two Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, published The Future of Employment. Yuval writes that, as part of this, they predict: “There is a 99 percent probability that by 2033 human telemarketers and insurance underwriters will lose their jobs to algorithms. There is a 98 percent probability that the same will happen to sports referees, 97 percent that it will happen to cashiers and 96 percent to chefs. Waiters—94 percent. Paralegal assistants—94 percent. Tour guides—91 percent. Bakers—89 percent. Bus drivers—89 percent. Construction laborers—88 percent. Veterinary assistants—86 percent. Security guards—84 percent. Sailors—83 percent. Bartenders—77 percent. Archivists—76 percent. Carpenters—72 percent. Lifeguards—67 percent. And so forth.”
There’s always a small ray of sunshine, of course. On the bright side, Yuval goes on to say, “There are of course some safe jobs. The likelihood that computer algorithms will displace archaeologists by 2033 is only 0.7 percent, because their job requires highly sophisticated types of pattern recognition and doesn’t produce huge profits. Hence, it is improbable that corporations or governments will make the necessary investment to automate archaeology in the next twenty years” (I did say this was a “small ray” of sunshine).
I have a lot more to say, but I think I’ve said enough (at least, for the moment). What about you? Do you have anything you feel moved to say about any of this? (Yes, of course you can use ChatGPT to compose your answer, but please start your comment by saying something like “ChatGPT says…” or “ChatGPT thinks…” so we all know with whom we are conversing.)