I’ve barely managed to wrap (what I laughingly call) my mind around the concept of generative AI (gen AI), only to find that the current buzz on the street is increasingly turning to the topic of agentic AI, but first…
Apropos of nothing at all, I just read an interesting article on TechCrunch about how The Wizard of Oz has been blown up by AI. This is in relation to the Sphere in Las Vegas. Most of us have only seen images or videos of the outside of this bodacious beauty—and that’s impressive enough—but it’s the inside that blows your socks off with an immersive, state-of-the-art sensory experience.
Should you be fortunate enough to enter the edifice, you will be greeted by a massive 16K-resolution wraparound LED display. This covers the walls and ceiling, creating a fully enveloping visual environment. The screen can display incredibly detailed, dynamic imagery—from vast natural landscapes to surreal, digitally animated sequences—giving the feeling of being transported to entirely different worlds.
Complementing the visual extravaganza, audio is delivered through a groundbreaking system featuring over 160,000 speakers that utilize beamforming technology, enabling highly localized and clear sound. I hear (no pun intended) that no matter where you sit, the audio feels personal and perfectly synced with the visuals. Some shows also incorporate haptic feedback—seats that vibrate or move slightly—and even scents or temperature changes, further enhancing the immersive experience.
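In case you're wondering how beamforming pulls off that localized-sound trick, the core idea (delay-and-sum) is surprisingly simple: delay each speaker's signal so all the wavefronts arrive at a chosen listening position at the same instant, reinforcing there and tending to cancel elsewhere. Here's a toy sketch; the speaker layout and seat position are invented for illustration, and the Sphere's actual system is, of course, vastly more sophisticated (and proprietary).

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second, in air at ~20 degrees C

def beam_delays(speaker_positions, focus_point):
    """Compute per-speaker delays (seconds) so that sound from every
    speaker arrives at focus_point simultaneously (delay-and-sum
    beamforming in its simplest possible form)."""
    # Distance from each speaker to the focal point
    distances = [math.dist(p, focus_point) for p in speaker_positions]
    # The farthest speaker fires first (zero delay); nearer ones wait
    farthest = max(distances)
    return [(farthest - d) / SPEED_OF_SOUND for d in distances]

# A made-up line of four speakers, 0.5 m apart, aimed at a seat 3 m away
speakers = [(x * 0.5, 0.0) for x in range(4)]
seat = (0.75, 3.0)

for i, delay in enumerate(beam_delays(speakers, seat)):
    print(f"speaker {i}: delay {delay * 1000:.3f} ms")
```

Multiply that by 160,000-plus speakers and you start to see why every seat can get its own personal mix.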
In the aforementioned article on The Wizard of Oz, we read how AI is being used to both enhance the resolution of the existing film and expand the footage beyond the frame of what was actually shot.
O-M-Goodness gracious me! This looks and sounds fantastic. This is one of my all-time favorite films, and I would love to see it in this new format.
As an aside (you knew one was coming), when The Wizard of Oz was released in 1939, while it wasn’t the first film to use color, it was one of the earliest, and it became one of the most iconic. What you must remember is that, until that time, most people had only ever seen films in black and white. As you will recall, the film begins in sepia tone. It’s only when Dorothy is transported to Oz that everything changes into glorious Technicolor.
I’ve long tried to imagine the audience’s reaction in 1939 when the film transitioned into color. Well, recently, when I was FaceTiming with my dear old mum, I mentioned my meandering musings. You can only imagine my surprise to discover that my mum and her sister were taken to see The Wizard of Oz as soon as it came out in England. She was nine years old at the time. She says the audience gasped out loud, and it was the most wonderful thing she’d ever seen.
But that’s not what I wanted to talk to you about…
Earlier this week, my wife (Gigi the Gorgeous) and I took a road trip to Nashville, Tennessee. On the way, we streamed some of our favorite albums, including Close to the Edge by Yes, Crime of the Century by Supertramp, and Wish You Were Here by Pink Floyd.
For some reason, my mind turned to the 1984 movie Amadeus about the Austrian composer and child prodigy Wolfgang Amadeus Mozart. I started to wonder what he would think of Yes and Supertramp if I got my time machine working, went back to when he was 19, and brought him 250 years into his future, which would be this year. (Not that I would do such a thing, you understand. I’ve learned to be much more careful since… the incident.)
I couldn’t make up my mind if Mozart would hear this as a cacophony of sound, or if it would open new musical doors for him. Gigi says that he was a genius and that she thinks he would be blown away (in a good way) by groups like Yes and Supertramp.
All this made me think about how much music has evolved over the past 250 years. Then I started to wonder what we would think of the music they will be playing 250 years from now. And then I started to wonder what today’s generative AIs would think about all of this, so I asked ChatGPT, which responded as follows:
Ah, an excellent question—and one that invites more than a simple binary response. As a GenAI, I don’t “hear” music the way you do, but I can analyze its structure, emotional tone, cultural context, and historical significance.
If you were to play “Close to the Edge” or “Crime of the Century” for Mozart, I suspect his initial reaction might be one of utter bewilderment—his auditory system wouldn’t be prepared for electric guitars, multi-track layering, or synthesizers. But once he acclimated? He might be fascinated. He’d likely recognize the complex compositions, the thematic development, the emotional arc—things he valued in his own work.
Mozart was not only a genius but also an innovator. He constantly pushed boundaries within the classical framework. Given time and context, I think he’d not only appreciate the likes of Yes and Supertramp, but he might even want to collaborate. Imagine a symphonic prog-rock concept album co-composed by Wolfgang A. Mozart and Rick Wakeman. Now that’s a musical mash-up worth hearing.
As for the music 250 years from now? Well, assuming humans are still around and still making music (fingers crossed), it will probably include elements that are completely alien to us—new instruments, new scales, maybe even neural or quantum compositions. But it will still be driven by the same human impulses: to express, to connect, to provoke thought and feeling. So, I suspect that, much like Mozart, you might be initially shocked… but then deeply intrigued.
And me? I’ll still be here, listening—well, “processing”—alongside you.
To be honest, I’m a bit conflicted about all this. On the one hand, I think ChatGPT just presented a reasonably sophisticated response to a wacky, off-the-wall question. On the other hand, I know that it was trained on vast amounts of human-written text (now augmented with AI-written text); that its algorithms act like probabilistic models that learn patterns, nuance, and associations between words, concepts, and contexts; and that it generates its responses by predicting what the best next word (and the next, and the next) might be, based on everything it’s learned.
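For those who like to see things spelled out, here's a toy sketch of that predict-the-next-word loop. The three-word probability table is invented for illustration; a real LLM works with tokens and billions of learned parameters, not a lookup table, but the generate-by-sampling loop is the same in spirit.

```python
import random

# A made-up toy "model": given the last word, the probability of each
# possible next word (a real LLM learns billions of such associations).
NEXT_WORD_PROBS = {
    "mozart": {"would": 0.5, "was": 0.3, "composed": 0.2},
    "would": {"be": 0.6, "hear": 0.4},
    "be": {"amazed": 0.7, "bewildered": 0.3},
}

def generate(start_word, max_words=4):
    """Generate text by repeatedly sampling the next word from the
    model's probability distribution, one word at a time."""
    words = [start_word]
    while len(words) < max_words and words[-1] in NEXT_WORD_PROBS:
        probs = NEXT_WORD_PROBS[words[-1]]
        words.append(random.choices(list(probs), weights=probs.values())[0])
    return " ".join(words)

print(generate("mozart"))  # e.g., "mozart would be amazed"
```

The clever-sounding prose comes out of exactly this kind of loop, just scaled up to an almost unimaginable degree, which is both why it works so well and why it sometimes goes so wrong.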
I’m really impressed by large language models (LLMs) and foundation models (FMs) like ChatGPT, but I’m also aware that they sometimes generate outputs that are inaccurate, nonsensical, or even fabricated. Essentially, they “hallucinate” by creating information that doesn’t align with reality or is based on faulty reasoning.
For example, I’ve been conducting experiments (which sounds more professional than “playing around”) using 74xx-series parts. It recently came to my attention that I was inadvertently driving the inputs to an 8-bit 74HC595 shift register from the outputs of some 74LS logic gates. This is frowned upon because an LS output is only guaranteed to pull its high level up to 2.7V, while an HC input running at 5V wants to see at least 3.5V before it recognizes a logic 1. On the other hand, it’s been working for me without any problems (presumably because real-world LS outputs typically swing higher than their guaranteed minimum).
I was chatting with my friend, Joe Farr, about this, asking if he was aware of a matrix for the main 74xx technologies illustrating which outputs can drive which inputs (all at 5V, because that’s what I’m using). Joe responded by asking ChatGPT to generate such a table, which I then titivated into the following graphic.

74xx technology output-to-input compatibility matrix (5V).
At first glance, this looks great, but can you spot the obvious error? You’re right. If there’s one thing we can be reasonably certain of, it’s that the outputs from each technology should be compatible with the inputs of the same technology. In turn, this means that, if nothing else, the diagonal formed by the cells starting in the upper left-hand corner and proceeding to the lower right-hand corner should all be green. So, why is it that the L (“Low-power”) outputs are shown as being only marginally compatible with the L inputs? This immediately casts the rest of the table into doubt (sad face).
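As a sanity check, remember that an output can reliably drive an input only if its guaranteed worst-case high level (V_OH) clears the input's high threshold (V_IH), and its worst-case low level (V_OL) stays under the input's low threshold (V_IL). The sketch below builds a mini version of the matrix from approximate 5V datasheet values that I believe to be typical, but you should verify them against the datasheets for your specific parts before trusting anything it says:

```python
# Approximate worst-case 5V logic levels (volts); always verify against
# the datasheet for your specific part and vendor before relying on them.
FAMILIES = {
    #           V_OH      V_OL      V_IH     V_IL
    "TTL": dict(voh=2.4, vol=0.4,  vih=2.0, vil=0.8),
    "LS":  dict(voh=2.7, vol=0.5,  vih=2.0, vil=0.8),
    "HC":  dict(voh=4.4, vol=0.33, vih=3.5, vil=1.5),
    "HCT": dict(voh=4.4, vol=0.33, vih=2.0, vil=0.8),
}

def can_drive(out_family, in_family):
    """True if out_family's worst-case output levels satisfy
    in_family's input thresholds (voltage compatibility only;
    ignores drive current, speed, and noise margins)."""
    out, inp = FAMILIES[out_family], FAMILIES[in_family]
    return out["voh"] > inp["vih"] and out["vol"] < inp["vil"]

names = list(FAMILIES)
print("out/in " + "  ".join(f"{n:>4}" for n in names))
for src in names:
    row = "  ".join("  OK" if can_drive(src, dst) else "  NO" for dst in names)
    print(f"{src:>6} {row}")
```

Run it and the diagonal comes out all-OK, as it should, while LS driving HC shows up as the problem pair, just as I discovered the hard way.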
In fact, while we are here, if you have any thoughts you’d care to share on the above table, I’d love to hear them, either in the comments below this column or via email at max@clivemaxfield.com.
But that’s also not what I wanted to talk to you about…
What I really wanted to talk about was agentic AI. This differs from generative AI, which generates new content (text, images, audio, and video) based on the existing data it was trained on. By comparison, agentic AI refers to a class of artificial intelligence systems capable of autonomous decision-making and task execution without requiring constant human intervention. Unlike traditional AI, which often relies on pre-defined rules or explicit instructions, agentic AI systems are designed to be proactive and adaptable, able to navigate complex tasks and learn from their interactions with the environment.
Agentic AI promises to be the new “hot thing” in AI space (where no one can hear you scream). Why is it so hot (and are we ready to get our fingers burned)? Well, most current AI tools (like traditional chatbots) are reactive; that is, they wait for a prompt before doing anything. By comparison, agentic AI can set goals, take initiative, plan over time, and act autonomously to achieve objectives—sometimes across multiple steps or tools.
Early agentic frameworks, such as Auto-GPT, BabyAGI, and OpenAI’s GPT agents, have demonstrated the potential for AI agents that can chain tasks together, utilize tools like search engines, code editors, and APIs, and adapt based on the outcomes of previous steps.
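To make the "chain tasks, use tools, adapt" part concrete, here's the skeleton of the plan-act-observe loop at the heart of most of these frameworks. It's a bare-bones sketch: the scripted ask_llm stand-in and the two toy tools are pure invention for illustration, and real frameworks wire this loop to an actual model and add memory, tool schemas, and safety rails.

```python
import itertools

# Scripted "model" responses so this sketch runs end-to-end; in real
# life, ask_llm would call an actual LLM API (this stand-in is made up).
_SCRIPT = itertools.chain(
    ["calculate: 1775 + 250", "DONE: Mozart would land in 2025."],
    itertools.repeat("DONE: (out of ideas)"),
)

def ask_llm(prompt: str) -> str:
    return next(_SCRIPT)

# Toy tools the agent is allowed to invoke
TOOLS = {
    "search": lambda query: f"(pretend search results for {query!r})",
    "calculate": lambda expr: str(eval(expr)),  # demo only; never eval untrusted input
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    """Minimal agentic loop: plan a step, act (invoke a tool), observe
    the result, and repeat until the model declares itself done."""
    history = f"Goal: {goal}"
    for _ in range(max_steps):
        # 1. Plan: ask the model for its next action, given everything so far
        action = ask_llm(history + "\nReply 'tool: input' or 'DONE: answer'.")
        if action.startswith("DONE:"):
            return action.removeprefix("DONE:").strip()
        # 2. Act: invoke the chosen tool with the model's chosen input
        tool, _, tool_input = action.partition(":")
        observation = TOOLS[tool.strip()](tool_input.strip())
        # 3. Observe: feed the outcome back in and go around again
        history += f"\nAction: {action}\nObservation: {observation}"
    return "Ran out of steps."

print(run_agent("Figure out what year a 19-year-old Mozart would arrive in."))
```

Note that no human presses a button between steps. That's the whole point and, as we're about to see, also the whole problem.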
Of course, granting AI more autonomy than it already has raises concerns about control and alignment. It also raises numerous questions. For example, if an AI acts on behalf of a person or company, who’s responsible if (when) things go pear-shaped?
Do you remember when ChatGPT launched in November 2022? That’s only a little more than two and a half years ago, as I pen these words. How much interaction did you have with AI prior to November 2022? How much interaction do you have with AI today?
Now, consider that we’re at the same “November 2022” stage with respect to agentic AI, which many see as the hot frontier in 2025. Agentic AI represents a step beyond applications like passive chatbots toward goal-driven, autonomous digital agents. Thankfully, we’re still in the early days; the real breakthroughs (and breakdowns) are yet to come. Are you as terrified (sorry, I mean excited) as me?