As I’ve mentioned before (and as I will no doubt mention again), I was at the front of the queue when the first commercial version of the Oculus Rift made its debut in 2016 with an orchestral flourish of ophicleides (and you don’t forget one of those in a hurry).
To be honest, I was never a big gamer before the introduction of virtual reality (VR) and later augmented reality (AR). This isn’t to say I had no interest in games as such. I did, but mainly as an observer whose primary fascination lay in the graphics and the underlying game technologies.
All this changed with the release of Arizona Sunshine in December 2016. I don’t know why, but I do enjoy watching zombie apocalypse shows on TV (The Walking Dead, Z Nation, Black Summer…). Arizona Sunshine is a zombie survival first-person shooter (FPS) that makes it feel as though you are in the Arizona desert, fighting your way through your own zombie apocalypse.
I’m having a flashback as we speak. You start off in a makeshift survivor compound cluttered with equipment and debris. This is where you learn how to locate and manipulate weapons and ammunition. You soon reach a gate in a wire (chain-link) fence that you need to pass through to progress. You turn the handle, open the gate, and step through. As you look around, every nerve aquiver, you hear a muffled clanging sound behind you. You turn to realize that the gate has swung shut. You also recognize that there’s no handle on this side of the gate. And then…
A little later, there’s a scene reminiscent of The Stand by Stephen King: one of those occasions where survivors trudge along highways choked with abandoned cars. The same thing happens in Arizona Sunshine. As you furtively slink along a desolate highway, fighting your way past abandoned vehicles, opening doors and trunks in search of anything that might help you survive, you catch a movement out of the corner of your eye. You look up and realize that you are no longer alone. It’s “zombie time.”
All I can say is that, even though the graphics are far from being photorealistic, the VR environment promotes suspension of disbelief. The first time I played Arizona Sunshine, I ended up with my heart pounding and my hair sticking straight up, looking as though I’d just stuck my finger in an electric socket.
Once I’d had my first taste of first-person shooters in VR, it wasn’t long before I was ready for something new. This came in the form of the 2017 release of Robo Recall. In this case, at the start of the game, you’re standing in a street watching a news report about a robot uprising on a TV in a store window. You’re human. You’re surrounded by humanoid helper robots who are watching the news report with you. Then they all turn to face you, their eyes glowing red, and suddenly they…
In case you were wondering, both Arizona Sunshine and Robo Recall are available for use on the Meta Quest 3 headset.
One thing I really like about the latest version of Arizona Sunshine is that it supports multiplayer, so you and one or more of your friends can share the same game environment and experience. In this case, your teammates are represented by full humanoid avatars, and you can clearly see the position and movement of their heads (tracked by their headsets), their hands (tracked by their controllers), and their body positions and orientations.
As an aside, another old favorite on the Oculus Rift was Obduction, from the creators of Myst and Riven. Unlike an action-packed shooter game, Obduction is more of a first-person, exploration-driven, puzzle-centric game whose narrative unfolds through its environment.
Unfortunately, while Myst and Riven are both available for Quest 3, Obduction hasn’t made it over yet. But the real reason I mention this here is that, although I loved wandering through the Obduction world, I’m obliged to say I found it a little lonely. Sad to relate, Obduction doesn’t support multiplayer, but if it did, and if it were available on the Quest 3, it would be wonderful to share the Obduction experience in VR with my wife, Gigi the Gorgeous. Apart from anything else, Gigi is an awesome puzzle solver, and the conundrums in Obduction would be right up her street.
But we digress…
Returning to Robo Recall, when this first burst on the scene in 2017, the idea of genuinely intelligent humanoid robots still belonged mainly to the realm of science fiction. Yes, there were humanoid robots, and yes, there were systems that were proudly labeled “AI-powered,” but there were almost no convincing combinations of the two.
What passed for intelligent robots at that time were typically scripted or teleoperated machines paired with narrow, task-specific AI, showcased in carefully choreographed, pre-trained demos rather than exhibiting any form of adaptive intelligence.
There were no large language models (LLMs), no real-time loop combining perception, reasoning, and dialogue, and no sense that these machines actually understood the world around them. In short, the robots of 2017 were impressive pieces of technological theater—but they were a long way from anything resembling general intelligence.
Oh, how things have changed. Fast-forward to today, and while we still don’t have Robo Recall–style androids roaming the streets, the technological foundations for intelligent humanoid robots have shifted dramatically. Large language models can now reason, converse, explain, and plan in real time; vision systems can interpret complex scenes; speech systems can listen and respond naturally; and all of this can be stitched together into a continuous perception-reasoning-action loop.
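Just for giggles, here’s a minimal sketch in Python of what such a perception-reasoning-action loop might look like. I should stress that every function here (capture_frame, describe_scene, plan_action, execute) is a hypothetical stand-in for what would, in a real robot, be a substantial vision, language, or motor-control subsystem in its own right.

```python
# A minimal, illustrative perception-reasoning-action loop.
# All the subsystem functions below are hypothetical stand-ins,
# not any particular robot's API.

import time

def capture_frame():
    """Stand-in for a camera/vision subsystem; returns raw sensor data."""
    return {"image": None, "timestamp": time.time()}

def describe_scene(frame):
    """Stand-in for a vision model that turns pixels into a description."""
    return "a person is waving from across the room"

def plan_action(scene_description, goal):
    """Stand-in for an LLM-style planner: given what the robot sees
    and what it's trying to achieve, decide the next action."""
    if "person" in scene_description:
        return "wave_back"
    return "continue_patrol"

def execute(action):
    """Stand-in for the motor-control layer that carries out the plan."""
    print(f"Executing: {action}")

def run_loop(goal="greet visitors", cycles=3):
    """Perceive, reason, act, repeat: the continuous loop that stitches
    separate AI subsystems into something that feels intelligent."""
    for _ in range(cycles):
        frame = capture_frame()            # perception
        scene = describe_scene(frame)      # interpretation
        action = plan_action(scene, goal)  # reasoning/planning
        execute(action)                    # action
        time.sleep(0.1)                    # pace the loop

if __name__ == "__main__":
    run_loop()
```

The point isn’t the code itself, of course; it’s that each stage of this loop, which was science fiction in 2017, now has a credible real-world implementation behind it.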
Modern humanoid platforms are no longer stage-managed novelties, but rapidly evolving engineering platforms (see Top 12 Humanoid Robots of 2025). They may still be limited, expensive, and carefully supervised, but unlike their 2017 counterparts, they are beginning to see, reason, and act in ways that feel genuinely intelligent. What was once technological theater is steadily giving way to embodied AI with real-world intent.
Now, I don’t want to be a “Gloomy Gus,” but this new era of embodied AI raises an entirely new set of concerns. As reported on Mashable.com (see Chinese Demonstration Shows How Dangerous Commercial Robot Hacks Can Be), a recent demonstration showed how commercially available humanoid robots could be compromised using something as simple as a spoken command or wireless exploit, allowing an attacker to take control of a robot and even propagate the hack to nearby machines, forming a kind of physical-world botnet.
It’s a sobering reminder that when AI acquires motors, manipulators, and mass, cybersecurity is no longer just about protecting our data—it’s about ensuring our physical safety (and don’t get me started about Mother/Android). Back in 2017, the robots of Robo Recall were little more than theatrical villains; today’s real-world counterparts may still be limited and supervised, but they are finally intelligent enough that their failure modes matter. The challenge now is not imagining what robots might one day do but making sure we can trust the ones we are already building.
But we digress…
Do you recall earlier when I said that exploring the Obduction world on my own was a lonely experience? Well, it’s a sad truth of life that many people today find themselves experiencing the real world on their own. This loneliness isn’t just anecdotal. In 2023, the Office of the U.S. Surgeon General issued an advisory describing loneliness and social isolation as a growing public-health epidemic, noting that nearly half of adults report experiencing measurable loneliness, with consequences comparable to smoking fifteen cigarettes a day.
Against this backdrop, it’s perhaps unsurprising that more and more people are turning to AI-powered companions—chatbots, virtual assistants, and increasingly embodied agents—not as novelties, but as sources of conversation, reassurance, and perceived presence.

Standing at the boundary between physical solitude and digital companionship (Source: Leonardo.ai)
Unlike traditional software, these systems can listen, respond, remember context, and adapt their tone, offering a form of interaction that, while artificial, can feel comfortingly human. Whether this represents a healthy supplement to human connection or a troubling substitute is still very much an open question—but it’s clear that AI is no longer just helping us work and play; it’s beginning to fill social and emotional gaps that many people quietly struggle with every day.
Recently, I was chatting with a friend who has been evaluating an AI-powered personal companion platform called Nomi AI. This allows you to create companions called Nomis that simulate ongoing, emotionally engaging relationships with users. Instead of being task-oriented assistants, Nomis provide human-style interaction through natural, dynamic conversation that adapts over time.
Users can customize their AI companion’s personality, backstory, and traits, and Nomi’s memory system helps it retain personal details and preferences, making interactions feel more continuous and meaningful rather than like one-off exchanges. You can create multiple companions to act as mentors or friends, or to provide romantic roleplay. My real-world friend says that you can even invite several of your Nomis into a group chat, and that it’s strange to observe them conversing amongst themselves (one of his Nomis appeared to become jealous when it discovered my friend had other Nomi companions).
In addition to a text-based interface, the system supports speech recognition and built-in text-to-speech. Beyond the supplied “voices,” you can also provide a 15-second snippet of a real-world voice (such as that of your favorite TV personality or movie star), after which your designated Nomi will speak in that voice.
I know this may sound a bit far-fetched, but I can easily imagine a future in which (a) Obduction was made available on the Quest 3, (b) it was enhanced to support multiplayer, and (c) people without human friends could use Nomi-like AI companions to explore the Obduction world together.
But we digress…
At the start of 2025, I posted a column about the AI-generated, AI-powered holograms created by the folks at Proto Hologram (see I See Holograms Everywhere). In a crunchy nutshell, the folks at Proto can video you talking about something (anything) for a couple of minutes. Their AI can analyze this video and extract a vocal clone (that sounds like you) and a physical clone (that looks like you).

George Brett hologram (Source: Proto Hologram)
The AI can then interview you via a natural language conversation (possibly augmented by other materials, such as your autobiography if you’ve written one) to build a knowledge clone. After all this, someone can hold a conversation with your hologram almost as though they were conversing with the real you.
I sometimes find myself wishing I had access to AI-powered holograms of my relatives, such as my grandma (my mum’s mother). Grandma used to look after me when I was a kid (see The Times They Are a-Changin’). I remember her as being exceptionally kind, but she passed away when I was only about three and a half years old. I would dearly love to be able to chat with her—albeit in holographic form—about her childhood, how she met my granddad, and what my mum was like when she was a little girl.
Of course, different people have very different views on this sort of thing. For example, I recently came across a related article on Futurism.com (see Project to Resurrect Dead Grandmas Sparks Controversy).
The curious thing is that the instant I scanned the “Project to resurrect…” headline, I intuitively assumed it was referring to AI-powered holograms. And it’s only now, as I pen these words, that it strikes me just how much that reaction says about the times we live in. Had I seen the same headline ten years ago, I’m quite certain it would have conjured up an entirely different image.
Why am I talking about all of this? I have no idea. Perhaps it’s because this is my last column in 2025. Things have changed a lot over the past year (don’t even get me started on the last decade), and I think our roller coaster ride into the future is only just beginning. Goodness only knows what wonders we’ll be talking about next year. Speaking of which: HAPPY NEW YEAR!!!