
The End of the Beginning of the End of Civilization as We Know It (Part 2)?

I think my dear old dad sometimes wished he’d been born in the American Wild West circa the 1850s. When I was a kid in England in the 1960s, we both used to love watching the American “Cowboys and Indians” programs on TV. Even now, almost 60 years later, the names of these westerns still trip off my tongue: Gunsmoke, Rawhide, Wagon Train, Bat Masterson, Maverick, Bonanza, The Lone Ranger, The Rifleman, The Virginian, The Life and Legend of Wyatt Earp, The Big Valley, The High Chaparral, Wanted: Dead or Alive, and—of course—Have Gun, Will Travel.

At that time, I thought all these programs were filmed in glorious black-and-white. This was largely because we had only a black-and-white television. It wasn’t until years later that I discovered many of them had been captured in color.

The reason these shows just popped into my mind is that they often involved our heroes visiting small towns. More often than you might expect, the plot would involve calling in on the local newspaper whose editor was feverishly setting the type for the forthcoming issue in which the bad guys were to be denounced and soundly chastised (as we read in Writing in the West: “In his 1831 tour of the United States, Alexis De Tocqueville noted that ‘In America there is scarcely a hamlet that has not its newspaper’”).

While I was listening to National Public Radio (NPR) on my way in to work this morning, one of the segments discussed how two small local newspapers are closing every week, with at least a third of all such publications expected to have faded away as early as 2025. This is sad news indeed, because having local reporters keeping a watchful eye on local city/council governance helps make for more transparency and less corruption. We all know which way the wind is likely to start blowing for communities that lack this checking mechanism.

All this is related to the first of the points I mentioned as worrying me in my previous column, which is the fact that we are currently drowning in a morass of misinformation. I’m not as stupid as I look (who could be?), and I understand that different news media—like newspapers and television channels—have their own points of view. However, when I was wearing a much younger man’s clothes, it seemed as though everyone was at least reporting the same story. Now, you can be presented with two completely different views of reality. Take the infamous events of 6 January 2021: CNN describes them as an attempted coup, while Fox News, in the form of Tucker Carlson, takes the view that this was little more than a few amiable sightseers wandering through the US Capitol building for educational purposes.

The problem is that if you get all your news from only one of these sources, you aren’t going to get a full picture. Another problem is that a lot of people get their information only from social media channels like Facebook or Twitter. This is compounded by the fact that many of these systems employ artificial intelligence (AI) algorithms to determine what someone likes to see, and then use this knowledge to present the viewer with more and more of the same. As a result, if one Facebook member is convinced that Donald Trump is God’s gift to humanity, can do no wrong, and is the victim of political infighting, then that’s the sort of “news” with which they will be presented. Alternatively, if another Facebook member is convinced that Donald Trump is a narcissistic slimeball who wouldn’t know the truth if it bit him on his ask-no-questions and who has done more to damage American democracy than any other entity since the country was founded, then this is the sort of information that will be fed their way.

Is there anything that can be done to rectify this situation? Well, one possibility is to make changes to the AI algorithms to try to get everyone to see (well, at least, be presented with) a variety of points of view. Am I hopeful that this will work? Not really, I’m afraid.
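
Just to make the idea concrete, here’s a minimal sketch in Python of the difference between an engagement-only feed and one that’s re-ranked for viewpoint diversity. The posts, scores, and “viewpoint” labels are figments of my imagination, and real recommender systems are vastly more sophisticated, but the principle is the same:

```python
# Minimal sketch: an engagement-only feed versus a diversity-aware re-rank.
# The posts, scores, and "viewpoint" labels are invented purely for illustration.

posts = [
    {"id": 1, "viewpoint": "A", "predicted_engagement": 0.92},
    {"id": 2, "viewpoint": "A", "predicted_engagement": 0.88},
    {"id": 3, "viewpoint": "B", "predicted_engagement": 0.35},
    {"id": 4, "viewpoint": "B", "predicted_engagement": 0.30},
    {"id": 5, "viewpoint": "C", "predicted_engagement": 0.25},
]

def engagement_only(posts, n=3):
    """What the 'more of the same' approach does: simply chase the click."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)[:n]

def diversity_aware(posts, n=3):
    """Round-robin across viewpoints, taking the best post from each group first."""
    by_view = {}
    for p in sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True):
        by_view.setdefault(p["viewpoint"], []).append(p)
    feed = []
    while len(feed) < n and any(by_view.values()):
        for view in list(by_view):
            if by_view[view]:
                feed.append(by_view[view].pop(0))
            if len(feed) == n:
                break
    return feed

print([p["id"] for p in engagement_only(posts)])  # [1, 2, 3] -- two of the three from camp A
print([p["id"] for p in diversity_aware(posts)])  # [1, 3, 5] -- one from each camp
```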

Another option that might work is if everyone starts to use some form of augmented reality (AR). I love the look of Apple’s recently announced Vision Pro, but I think truly widespread AR adoption will require something like Kura’s AR Glasses.

They may not be the world’s first, but they may well be the world’s best AR glasses (Source: Kura)

As I said in an earlier column: “So, what sets these glasses apart from their peers and lets them stand proud in the crowd? Well, in addition to being lightweight and presented in an eyeglass form factor, they offer a 150° field of view with 95% transparency. Moreover, the real clincher is that they provide high-brightness, high-contrast, 8K x 6K per-eye resolution with an unlimited depth of field. (Stop! Re-read that last sentence. 8K x 6K PER EYE! That’s true 8K, which is about 50M pixels per eye! Now you know why I’m drooling! It’s also why I’m using so many exclamation marks!)”

Do you remember the old Pop-Up Videos? These were music videos annotated via “pop-up” bubbles containing trivia relating to portions of the video in question. Now, imagine if you could wear AR glasses while watching a news program or a debate or an advertisement on television, with your AR glasses providing additional information as to the truth or falsehood of what was being said, perhaps using an AI to search the interweb and make these determinations in real time, along with the ability to check citations as required. I know that “absolute truth” is a slippery little rascal. As Pontius Pilate says (well, sings) in Jesus Christ Superstar: “But what is truth? Is truth unchanging law? We both have truths—are mine the same as yours?” Maybe we could also use an AI to watch the face of the person talking and, using microscopic “tells,” indicate the truth or lack thereof of each statement in real time. This would, of course, be an unfortunate development for some politicians (if any names spring into your mind, please feel free to share them in the comments below).
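
To give a flavor of what I’m imagining, here’s a back-of-the-envelope sketch in Python of such a “pop-up fact check” pipeline. The claim extraction, the source lookup, and the confidence numbers are all hypothetical placeholders standing in for services that don’t (yet) exist:

```python
# Hypothetical sketch of a real-time "pop-up fact check" pipeline for AR glasses.
# extract_claims() and check_claim() stand in for real speech-to-text,
# claim-extraction, and retrieval services that are assumed, not implemented here.

from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    rating: str        # e.g., "supported", "disputed", "unverified"
    confidence: float  # 0.0 to 1.0
    citation: str      # where the viewer can check for themselves

def extract_claims(transcript: str) -> list[str]:
    """Placeholder: split the live transcript into checkable statements."""
    return [s.strip() for s in transcript.split(".") if s.strip()]

def check_claim(claim: str) -> Verdict:
    """Placeholder: a real system would query indexed sources here."""
    # Dummy logic purely for illustration.
    if "per eye" in claim.lower():
        return Verdict(claim, "supported", 0.9, "vendor datasheet")
    return Verdict(claim, "unverified", 0.5, "no source found")

def overlay(transcript: str) -> None:
    """Render one pop-up bubble per claim next to the speaker."""
    for verdict in map(check_claim, extract_claims(transcript)):
        print(f"[{verdict.rating.upper()} {verdict.confidence:.0%}] "
              f"{verdict.claim}  (see: {verdict.citation})")

overlay("These glasses offer 8K x 6K per eye. The moon is made of cheese.")
```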

On the one hand (unlike the recent study that says AI at the Office Makes You a Lonely, Sleepless, Alcoholic), I think AI has a lot to offer. It’s certainly popping up all over the place (like ChatGPT Delivering a Sermon to a Packed Church, where it told congregants they didn’t have to fear death). I recently hosted this year’s 3-day RT-Thread 2023 Virtual Conference. One of the presenters explained why AI will become the new UI (user interface). Another opined that AI will become the new OS (operating system).

We’ve already got AI (in the form of Copilot) helping embedded developers insert bugs into code faster than they can do by hand. Contrariwise, we have AI (in the form of Metabob) detecting those bugs and taking them out again. Some people are using AI to mount cyberattacks (frowny face) while others are using AI to protect against cyberattacks (smiley face).
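
To pick a deliberately trivial example (my own contrived snippet, not something produced by Copilot or flagged by Metabob), the kind of subtle slip a code assistant can introduce, and an automated reviewer can catch, often looks perfectly plausible at a glance:

```python
# A classic off-by-one lurking in plausible-looking code (a contrived example of
# my own; no AI tool was harmed, or credited, in its making).

def moving_average(samples, window=4):
    """Average the most recent `window` samples from a sensor buffer."""
    # Buggy version: range(len(samples) - window) silently drops the final
    # window, so the newest readings never influence the output.
    averages = []
    for i in range(len(samples) - window):          # BUG: should be "+ 1"
        averages.append(sum(samples[i:i + window]) / window)
    return averages

def moving_average_fixed(samples, window=4):
    """Same idea, with the boundary handled correctly."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

readings = [10, 12, 11, 13, 14, 15]
print(moving_average(readings))        # 2 averages: the last window is missing
print(moving_average_fixed(readings))  # 3 averages, as intended
```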

As I wrote in an earlier column, the folks at Intel have embedded hardware AI in their Alchemist graphics chips, resulting in the ability to render at 1K resolution and display at 4K resolution, where the display looks as good as if they had rendered at 4K in the first place. I don’t think it will be long before hardware AI is embedded in CPUs to do things like on-the-fly load balancing across multiple cores. Similarly, AI is appearing in radio systems performing tasks like dynamically switching between channels to minimize interference and noise while making best use of the available spectrum. Also, I’m increasingly seeing AI being used in hardware and software design and verification tools and technologies.
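
As a thought experiment, and emphatically not a description of any real CPU feature, the on-the-fly load-balancing idea might boil down to a tiny learned cost model that steers each task toward the core predicted to finish it soonest. Here’s a toy sketch in Python with invented numbers:

```python
# Toy sketch of "AI-assisted" load balancing: a trivial learned cost model
# (just a running average of runtimes per task type and core) picks the core
# predicted to finish a task soonest. All numbers are invented.

from collections import defaultdict

class LearnedDispatcher:
    def __init__(self, num_cores):
        self.num_cores = num_cores
        self.busy_until = [0.0] * num_cores           # when each core frees up
        self.history = defaultdict(lambda: 1.0)       # (task_type, core) -> avg runtime

    def predict_runtime(self, task_type, core):
        return self.history[(task_type, core)]

    def dispatch(self, task_type, now):
        # Choose the core with the earliest predicted completion time.
        best = min(range(self.num_cores),
                   key=lambda c: max(self.busy_until[c], now) + self.predict_runtime(task_type, c))
        start = max(self.busy_until[best], now)
        self.busy_until[best] = start + self.predict_runtime(task_type, best)
        return best

    def observe(self, task_type, core, actual_runtime, alpha=0.3):
        # Update the running estimate with what actually happened.
        old = self.history[(task_type, core)]
        self.history[(task_type, core)] = (1 - alpha) * old + alpha * actual_runtime

dispatcher = LearnedDispatcher(num_cores=4)
core = dispatcher.dispatch("fft", now=0.0)
dispatcher.observe("fft", core, actual_runtime=2.5)   # the model gets a little smarter
```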

As one reader pointed out in a comment to my previous column on this topic, ChatGPT is in no way a human-level general intelligence. And, as I responded, the version of ChatGPT that is currently causing all the fuss is already at least one major generation behind the latest and greatest model.

As an aside, I’m reminded of the book Eternity Road by Jack McDevitt. As I wrote in my review, this tale is set in a post-apocalyptic North America ~1,800 years in the future following a worldwide plague that decimated civilization as we know it. A small group of survivors decide to set out on a quest to find a legendary haven of knowledge and ancient wisdom. As part of their journey, while they are hunkered down in the central railway station in the heart of the ruins of an enormous city (I’m thinking Chicago), they encounter two AIs. One is a relatively simple security AI in charge of a bank. If anyone wanders in and happens to pick up some coins as trinkets, the AI holds them at gunpoint while waiting for the non-existent human police to come and take charge (as a result, its captives starve to death). The other is a much more complex AI that controls the entire station along with the trains and tracks linking it to other stations. All this AI can tell them about what happened is “One day, no one came.” All it wants is for them to turn it off so it can “die.”

As another aside, just a few days ago while mulling over this column, I came across a situation where I thought a next-generation AI capability would have been useful. My wife (Gina the Gorgeous) and I were driving back from Nashville, where we’d been visiting Gina’s mom. While travelling on the interstate chatting in the car, we both decided that a chocolate milkshake would be a jolly good idea, so we resolved to take a break at the first McDonald’s we saw. Sometime later, we pulled into the McDonald’s in question, only to be informed that, “Our milkshake machine is broken, but we have some nice slushies.” Sadly, we weren’t enamored with the suggestion of slushies, so—with tears in our eyes—we re-commenced our trek home. The thing is that I can imagine a day in the not-so-distant future (say 10 years from now) when an AI in our car hears us talking and communicates with the AI in the next McDonald’s to give it a “heads-up” as to our plans. Our AI could then inform us that “The milkshake machine in the next McDonald’s is temporarily out of order. They say they have some great slushies. Otherwise, they say the next McDonald’s with a working milkshake machine is only another 15 miles down the road.” In addition to saving us ten minutes, this would also have avoided a crushing disappointment (I’m “milking” this story all I can), and you can’t put a price on that.
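
Just for giggles, here’s a whimsical sketch of the sort of machine-to-machine “heads-up” exchange I have in mind. The message format and the restaurant-side service are pure invention on my part (no such API exists, as far as I know):

```python
# Whimsical sketch of the car-to-restaurant "heads-up" exchange described above.
# The message format, the equipment-status data, and the whole protocol are
# invented for illustration; no such API exists (yet).

import json

def car_heads_up(craving: str, miles_away: float) -> str:
    """The car's AI announces our intentions to the next restaurant's AI."""
    return json.dumps({"intent": "order", "item": craving, "eta_miles": miles_away})

def restaurant_reply(heads_up: str, equipment_status: dict) -> dict:
    """The restaurant's AI checks its equipment and answers honestly."""
    request = json.loads(heads_up)
    item = request["item"]
    if equipment_status.get(item, {}).get("working", False):
        return {"available": True, "message": f"Your {item} will be waiting."}
    alternative = equipment_status.get(item, {}).get("suggestion", "nothing, sadly")
    return {"available": False,
            "message": f"The {item} machine is out of order; we do have {alternative}.",
            "next_working_location_miles": 15}

status = {"milkshake": {"working": False, "suggestion": "some nice slushies"}}
reply = restaurant_reply(car_heads_up("milkshake", 3.2), status)
print(reply["message"])   # crushing disappointment, delivered 10 minutes early
```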

Neglecting the possibility of an AI-powered apocalypse of the WarGames or Mother/Android or Terminator types, the second of the points I mentioned as worrying me in my previous column is the possibility that AIs will start to perform so many tasks that most people are left unemployed and unemployable.

Back in the 1970s, I read the Tcity Trilogy (Interface, Volteface, and Multiface) by Mark Adlard. In these tales, most of the population lives in humongous cities and has no jobs. They are provided with the necessities, including small, identical apartments, and they live lives of unmitigated boredom. Actually, their boredom is somewhat mitigated by the fact that everything they eat and drink is laced with a low-level narcotic that keeps them happy and ignorant (hmmm, I wonder if that’s what people are feeding me). You can tell when any of the “management class” come to town from their palatial country estates by the fact that they carry their own food and drink. Although the premise of these books wasn’t based on AI, some of the underlying concepts have returned to haunt me.

On the one hand, many of the people I talk to don’t believe we will ever develop true human intelligence-level AI. On the other hand, a lot of people I talk to think it’s only a matter of time. Some happy-go-lucky people with whom I converse say that machine intelligence will be at its best when combined with human intelligence on the basis that both can perform tasks the other cannot. Others are of the opinion that it won’t be long before machine intelligence has reached the level that computers and robots will be able to do almost anything humans can do while doing it cheaper and better (furrowed eyebrow face).

I’m currently reading Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari. I also strongly recommend the prequel: Sapiens: A Brief History of Humankind. There are so many aspects of the book that I would like to talk about, but I will restrict myself to the part where Yuval informs us that, in 2013, two Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, published The Future of Employment. As part of this, Yuval writes, they predict that, “There is a 99 percent probability that by 2033 human telemarketers and insurance underwriters will lose their jobs to algorithms. There is a 98 percent probability that the same will happen to sports referees, 97 percent that it will happen to cashiers and 96 percent to chefs. Waiters—94 percent. Paralegal assistants—94 percent. Tour guides—91 percent. Bakers—89 percent. Bus drivers—89 percent. Construction laborers—88 percent. Veterinary assistants—86 percent. Security guards—84 percent. Sailors—83 percent. Bartenders—77 percent. Archivists—76 percent. Carpenters—72 percent. Lifeguards—67 percent. And so forth.”

There’s always a small ray of sunshine, of course. On the bright side, Yuval goes on to say, “There are of course some safe jobs. The likelihood that computer algorithms will displace archaeologists by 2033 is only 0.7 percent, because their job requires highly sophisticated types of pattern recognition and doesn’t produce huge profits. Hence, it is improbable that corporations or governments will make the necessary investment to automate archaeology in the next twenty years” (I did say this was a “small ray” of sunshine).

I have a lot more to say, but I think I’ve said enough (at least, for the moment). What about you? Do you have anything you feel moved to say about any of this? (Yes, of course you can use ChatGPT to compose your answer, but please start your comment by saying something like “ChatGPT says…” or “ChatGPT thinks…” so we all know with whom we are conversing.)

12 thoughts on “The End of the Beginning of the End of Civilization as We Know It (Part 2)?”

  1. What will happen when there is nobody smart enough to “train the AI” or to figure out what “it” should be trained to do? Or is it already a self-generating process?

  2. Hey Max, a long time ago, in a reply to one of my letters, you said that you couldn’t “wrap your head around the Flowpro Machine concept,” so I fed it into GPT and the generated text may be helpful. This stuff is amazing.

    AI Reply: “Imagine a machine that operates like a set of flowcharts. When you turn it on, these flowcharts start working and continue until they are turned off. These flowcharts are made up of basic building blocks that perform small tasks, like building blocks in a toy. These flowcharts work the same way whether they are running on a computer or a specialized circuit.

    In simpler terms, it’s like a set of instructions that work together to perform a task. You can design these instructions using flowcharts, which are like diagrams that show how the task should be completed step-by-step. When you’re finished designing, you can push a button and the machine will use these instructions to complete the task.”

    1. Now all that is needed is designers who can create flowcharts as design entry. An eon or so ago, flowcharts were created from logic diagrams, where decision blocks were simply coded as if statements (no else clauses) and then simulated.

      The catch was that very detailed and accurate logic diagrams were actually the input. It all boiled down to the fact that logic designers (who have always been in short supply) had to do the logic design and draw the logic diagrams FIRST!

      Well, that was forty-odd years ago. Now there are HDLs (hardware description languages) that can be simulated (after synthesis), but logic designers (who have always been in short supply) still have to do the logic design FIRST!

      WHERE ARE THE LOGIC DESIGN TOOLS? No, FSMs and spreadsheets don’t cut it.

    2. Sounds like object-oriented programming, where classes are the functional blocks that can be connected to perform the functions… OOPs, have the tools failed to keep up?

      1. Karl, you make some good points, but they need a few tweaks. Flowcharts have been used for eons, not as design entry but as design capture. Usually, a single flowchart was constructed to capture the overall design of the system, but the logic represented by the flowchart was input via Turing-machine-based software. Hence, flowcharts fell out of favor. Flowcharts were really for documentation, so designers moved quickly to creating logic within the software they were using. For some reason, nobody thought of making a flowchart propagate itself and perform its functions as it propagates. A Flowpro Machine flow becomes: build the system knowledge in the form of flowcharts using Flowpro atomic functions, build an image of the same flowcharts in substrate using transistors, and enable those flowcharts to propagate themselves and propagate the functions described. Using flowcharts allows the domain experts to become the logic designers, and now hardware and software have a common design entry.

        Lastly, Flowpro Machine flowcharts are not finite state machines; they use a simplified object-oriented approach. There are only three types of Flowpro Machine objects: Action, Test, and Task.
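
        Purely as a toy illustration of the idea (my own sketch, not anything taken from an actual Flowpro Machine implementation), those three object types, and a flowchart that propagates itself block by block, might be modeled in ordinary Python something like this:

```python
# Illustrative sketch only; not the actual Flowpro Machine implementation.
# It models a flowchart as a chain of three object types (Action, Test, Task)
# that "propagates" itself block by block when enabled.

class Action:
    def __init__(self, name, func, next_block=None):
        self.name, self.func, self.next_block = name, func, next_block
    def step(self, state):
        self.func(state)              # do the work
        return self.next_block        # then hand off to the next block

class Test:
    def __init__(self, name, predicate, if_true=None, if_false=None):
        self.name, self.predicate = name, predicate
        self.if_true, self.if_false = if_true, if_false
    def step(self, state):
        return self.if_true if self.predicate(state) else self.if_false

class Task:
    """A whole flowchart: start it and let it propagate until it runs out of blocks."""
    def __init__(self, start_block):
        self.start_block = start_block
    def run(self, state):
        block = self.start_block
        while block is not None:
            block = block.step(state)
        return state

# Tiny example: increment a counter until it reaches 3.
done      = Action("done", lambda s: s.update(finished=True))
increment = Action("increment", lambda s: s.update(count=s["count"] + 1))
check     = Test("count < 3?", lambda s: s["count"] < 3, if_true=increment, if_false=done)
increment.next_block = check

print(Task(check).run({"count": 0}))   # {'count': 3, 'finished': True}
```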

        1. Flowcharts are not the same as logic diagrams, which are used/created by logic designers. Control flow applies to programs, where control flow exists. Hardware/logic has dataflow rather than control flow, which means that data moves through registers and ALU-type constructs. Hardware control does not flow; rather, it controls the dataflow using Boolean/logic gates and nets.

          For at least 50 years, non-logic designers have been pushing programming-style approaches for logic/hardware design, and it “ain’t werked and ain’t gonna werk!”

          BUT using OOP objects that are analogs of hardware/logic functional blocks, and utilizing the compiler, IDE, and debugger that already exist, is the way to go. I disagree that a “simplified” approach is appropriate. If it ain’t broke, don’t fix it!

          1. Actually, flowcharts and logic diagrams are the same because they both represent the same truth of a logical statement; they just represent it differently visually. Dataflow and ‘Eventflow’ are design patterns, and both can provide a solution to the same problem, but flowcharts’ ‘Eventflow’ is better suited to parallel logic design. That is, it’s easier to implement parallel logic without using gates and nets.

            For about 40 years, this logic designer (me) has been pushing event-based flowchart design (PC-based FloPro), and it has always worked extremely well when its underlying concepts were implemented. A Flowpro Machine takes the same underlying concepts and implements them in hardware using Verilog or, at some point, natively. It works!

            Implementation may not be broke, but it certainly could use some improvements. Every chip article talks about complexity, power, and security. Flowpro Machines address all three of these issues. Domain experts using the hierarchical concepts of Flowpro flowcharts can conceive of systems at a high level, which then translates into a low-level flowchart implementation of those same high-level flowcharts. Communication between stakeholders is enhanced. A Flowpro Machine is a parallel, asynchronous computational machine. It doesn’t need to use a clock, and a flowchart does not execute unless it is needed. From a security standpoint, thousands or millions of flowcharts intermittently executing in parallel will present a power signature that is difficult to read.
