
Dipping Our Toes Deeper into the AI+AR Computing Waters

I’ve said this until I’m blue in the face, but no one seems to want to listen: As I discussed in one of my “What the FAQ” columns (see What the FAQ are VR, MR, AR, DR, AV, and HR?), augmented reality (AR) is but a subset of mixed reality (MR), which also embraces diminished reality (DR), virtual reality (VR), and augmented virtuality (AV). Sad to relate, it’s the AR term that’s impinged on the collective consciousness, so that’s what we’ll have to run with for now.

What do we mean when we say “AI+AR computing”? Well, one way to look at this is that for most of computing’s history, our relationship with these machines has been defined by distance. First, we had mainframe monsters occupying entire bays accessed via remote terminals. Then we had bulky desktop computers sitting on our desks, followed by laptop computers perched on our knobbly knees. More recently, we have mobile devices that lurk in our pockets or sit in the palms of our hands.

The next phase, often described as AI+AR computing, involves blending artificial intelligence with augmented reality so that digital information seamlessly merges with the physical world, allowing us to interact with computing in far more natural and immersive ways.

Another relevant term in this context is “spatial computing.” This refers to a style of computing in which digital information is placed, understood, and interacted with in the physical 3D world around us, rather than being confined to flat screens. Spatial computing combines sensing of the real environment (cameras, depth sensors, AI perception), understanding of space and objects (mapping, tracking, semantics), rendering of digital content positioned in real-world coordinates, and interaction via natural actions such as looking, talking, moving, or gesturing. The result is a computing experience that feels embedded in reality, not separate from it.
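
If it helps to see those four ingredients in one place, here's a deliberately simplified sketch in Python of the sense, understand, render, and interact loop. Every class and function name below is invented purely for illustration; this is not any vendor's actual spatial-computing API.

```python
# A deliberately simplified sketch of the spatial-computing loop described above:
# sense the environment, understand it, render content anchored in world
# coordinates, and respond to a natural interaction such as a spoken query.
# All names here are invented for illustration; this is not any vendor's API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class WorldAnchor:
    """A position in real-world coordinates (meters) where content is pinned."""
    x: float
    y: float
    z: float


@dataclass
class Annotation:
    label: str
    anchor: WorldAnchor


def sense_environment() -> dict:
    # A real system would fuse cameras, depth sensors, and IMU data here.
    return {"objects": [{"name": "bookshelf", "anchor": WorldAnchor(1.2, 0.0, 2.5)}]}


def understand(scene: dict, query: str) -> Optional[Annotation]:
    # AI perception and semantics: find the object the user asked about.
    for obj in scene["objects"]:
        if query.lower() in obj["name"].lower():
            return Annotation(label=f"Your {obj['name']} is over here", anchor=obj["anchor"])
    return None


def render(annotation: Annotation) -> None:
    # A holographic display would place this label at the anchor's true depth;
    # here we simply report where it would go.
    a = annotation.anchor
    print(f"Drawing '{annotation.label}' at ({a.x:.1f}, {a.y:.1f}, {a.z:.1f}) m")


scene = sense_environment()
result = understand(scene, "bookshelf")  # interaction: e.g., "where's my bookshelf?"
if result:
    render(result)
```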

In this new world, instead of us having to stare at dashboards or displays, information would live precisely where it is needed—anchored to the real world. Heads-up displays, holographic walls, volumetric projections, and, most compellingly, lightweight smart glasses are all steppingstones toward this spatial form of computing.

It can be difficult to wrap one’s head around this sort of thing, but I have a couple of examples I often use when introducing these technologies to people who aren’t familiar with them, like my dear old mum. Example #1: Suppose you are walking through a part of the city that’s unknown to you. Instead of pulling your smartphone from your pocket to check directions, imagine following guidance arrows that float gently in space ahead of you. Example #2: I read a lot of books. I have them piled on shelves, desks, and the floor. I can imagine muttering to myself in the not-so-distant future, “What was that book I was reading about six months ago that mentioned Lady Ada in the context of AI?” And my AI+AR headset replying, “That was ‘27 Algorithms That Changed the World.’” And me then saying, “Hmmm, where did I leave that little rascal?” And my AI+AR headset responding, “It’s on the bookshelf to your right,” while also drawing arrows in the air and highlighting the book in question on the shelf.

If this future sounds tantalizingly close, that’s because many companies have already tried to build it. Unfortunately, today’s AI+AR headsets reveal just how hard the problem really is. Current systems tend to be bulky, power-hungry, expensive, and socially awkward.

Today’s AI+AR displays are less than ideal (Source: Pixabay)

To be honest, social awkwardness is perhaps the least of our problems. If you went to your local supermarket and saw a single customer strolling through life sporting one of these headsets, you might be tempted to think disparaging thoughts. However, if you returned to the same store a few months later to find that everyone was wearing these headsets except you, you wouldn’t worry what they looked like; instead, you’d start to wonder what information, like special offers, they were accessing that was unavailable to you.

Perhaps the biggest problem with existing headsets is that they are not well adapted to how human vision actually works. Many are not truly spatial; they present images at fixed focal distances, and they can even induce discomfort or nausea during extended use.

Long-term dreamers sometimes talk about AI+AR contact lenses. Perhaps one day. But in the foreseeable future, the most intuitive and socially acceptable solution is far more familiar: ordinary-looking eyeglasses.

To succeed, such glasses must be lightweight, thin, compatible with prescription lenses, and—above all—comfortable for all-day wear. That means delivering binocular augmented reality with realistic depth cues and eliminating the notorious vergence-accommodation conflict that plagues many existing displays. Achieving all of this simultaneously is an extraordinarily tall order. Which brings us to the fact that I was just chatting with Mike Noonen, who is the CEO of Swave Photonics.

The vision (no pun intended) for AI+AR smart glasses (Source: Swave)

Swave isn’t trying to make existing displays a bit thinner, a tad brighter, or marginally less nausea-inducing. Instead, the company has taken the far more audacious route of rethinking the display problem from first principles—right down to the level of individual photons.

Working in collaboration with imec, the folks at Swave have developed what they describe as “the first true holographic color display built using nano-scale pixels fabricated in a standard CMOS foundry process” (phew!). The key idea is deceptively simple: if you want to place realistic three-dimensional imagery into the real world, you must be able to steer light itself, not merely shine tiny colored dots toward the eye.

Achieving that requires pixels dramatically smaller than those used in conventional display technologies—small enough to manipulate light at the scale of its wavelength.

Mike tells me that true holography requires a pixel pitch no greater than half the wavelength of the light being displayed, which means the pixels in existing display technologies are an order of magnitude too large.

The magic point is a pixel pitch ≤300 nm (Source: Swave)
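
To put some rough numbers on that claim, here's a quick back-of-the-envelope check. The wavelengths and the "typical microdisplay" pitch below are representative textbook values I've assumed, not figures supplied by Swave, but they show why today's pixels miss the half-wavelength target by roughly a factor of ten.

```python
# Back-of-the-envelope check of the half-wavelength pitch requirement.
# Wavelengths and the "typical microdisplay" pitch are representative assumed
# values, not figures supplied by Swave.
wavelengths_nm = {"red": 640, "green": 530, "blue": 460}

for color, wl in wavelengths_nm.items():
    required_pitch_nm = wl / 2  # true holography needs a pitch <= lambda / 2
    print(f"{color:>5}: lambda = {wl} nm -> required pitch <= {required_pitch_nm:.0f} nm")

typical_microdisplay_pitch_nm = 3_000  # ~3 um, assumed as typical of today's microdisplays
print(f"A typical ~{typical_microdisplay_pitch_nm} nm microdisplay pitch is about "
      f"{typical_microdisplay_pitch_nm / 300:.0f}x larger than the ~300 nm target")
```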

To address this, Swave and imec have invented a next-generation display technology that uses phase change materials for its pixels and is implemented using a low-cost CMOS foundry process. The resulting Holographic eXtended Reality (HXR) technology supports true holographic color with its <300 nm pitch nano-pixels.

A photonics breakthrough with CMOS economics (Source: Swave)

At the heart of Swave’s approach lies a spatial light modulator containing roughly a quarter of a billion individually addressable nano-pixels on a die only a few millimeters across. Since the glasses employ separate modulators for each eye, that’s a total of half a billion pixels, which is more than sufficient to make me squeal with excitement.
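
For anyone who enjoys a quick sanity check: only the ~300 nm pitch and the quarter-billion pixel count come from the article; the square-array assumption and the arithmetic below are my own back-of-the-envelope sketch. Even so, it confirms that this many pixels really does fit on a die just a few millimeters on a side.

```python
import math

# Quick sanity check: how big is a die holding ~250 million pixels at a ~300 nm
# pitch? The pitch and pixel count come from the article; the square-array
# assumption and the arithmetic are my own back-of-the-envelope estimate.
pixels = 250_000_000
pitch_m = 300e-9

pixels_per_side = math.sqrt(pixels)             # assume a square array
die_side_mm = pixels_per_side * pitch_m * 1e3   # meters -> millimeters

print(f"~{pixels_per_side:,.0f} pixels per side -> a die roughly {die_side_mm:.1f} mm across")
# -> about 15,811 pixels per side, i.e., roughly 4.7 mm across
```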

Each pixel incorporates a phase-change material that can switch between amorphous and crystalline states, altering its optical properties and allowing the device to sculpt incoming light into precise diffraction patterns, which are the raw ingredients of a hologram. These state transitions take roughly 200 ns, which is far faster than the human eye can resolve and therefore easily supports real-time image refresh.
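
To give a feel for just how much headroom that leaves, here's one more tiny sketch. Only the ~200 ns figure comes from the article; the 60 Hz frame rate is simply a common video rate I've assumed for illustration.

```python
# How does a ~200 ns pixel switching time compare with a single video frame?
# The 200 ns figure comes from the article; 60 Hz is an assumed example rate.
switch_time_s = 200e-9
frame_rate_hz = 60
frame_time_s = 1 / frame_rate_hz

print(f"Frame period at {frame_rate_hz} Hz: {frame_time_s * 1e3:.1f} ms")
print(f"Pixel switching is ~{frame_time_s / switch_time_s:,.0f}x faster than one frame")
# -> roughly 83,333x faster, so the pixels are nowhere near the refresh bottleneck
```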

Illuminated by red, green, and blue laser sources, this sea of nano-pixels can generate dynamic holographic images that appear not just in front of the user, but at different depths in space—from arm’s length to optical infinity—creating a viewing experience that finally behaves the way human vision expects the world to behave.

How it works (Source: Swave)

The history of immersive displays is littered with heroic prototypes that never quite made the leap into daily life. The difference here is that Swave’s technology is explicitly aimed at the one form factor the world has already agreed to wear on its face without complaint: ordinary eyeglasses.

Because holography can place imagery at natural focal distances—and because the phase-change pixels are inherently energy-efficient—smart glasses based on this approach could, in principle, weigh less than 50 grams, operate for an entire day on a single charge, support prescription lenses, and avoid the bulky waveguides that have constrained many previous designs.

Should that come to pass, the implications extend far beyond reading text messages in mid-air. The same holographic engine could enable truly spatial heads-up displays in automobiles, projection-mapped interfaces for robots and physical-AI systems, and entirely new ways for humans and machines to share visual information.

Last, but certainly not least, I’m delighted to report that this is no longer a “science-fair demonstration.” Mike tells me that Swave is already placing display chips in the hands of early customers, an unmistakable sign that the journey from laboratory curiosity to commercial reality is now well underway.

I think it’s time for the big finish (takes a deep breath): The history of computing is a story of distance collapsing—room-sized machines to desktops, desktops to laps, laps to pockets, and pockets to palms. The combination of AI+AR and spatial computing represents the next step, where the boundary between the digital and physical worlds begins to dissolve.

While Swave’s holographic approach doesn’t guarantee that future, it certainly makes it feel suddenly, startlingly plausible. And if we truly are standing at the threshold of spatial computing, then future generations may look back on glowing handheld screens the same way we now regard rotary telephones: ingenious for their time, indispensable in their day… and ultimately just a steppingstone to something far more natural.

 
