As usual, my head is aswirl (like “awash,” but a tad less rambunctious) with cogitations and ruminations about all the once-impossible things I’ve been exposed to recently. I feel like the White Queen in Through the Looking-Glass by Lewis Carroll, who famously said, “Why, sometimes I’ve believed as many as six impossible things before breakfast.” All I can say is that she was an amateur of no account.
Before we leap into the fray with gusto and abandon (and aplomb, of course), let’s first remind ourselves that AI tasks that were once confined to the cloud, with its essentially limitless resources, are currently racing to the edge, where the “internet rubber” meets the “real-world road” (let’s all do our best to not get run over).
On top of this, AI is transmogrifying in ways that are awesome to behold. It’s now common to talk in terms of “Perceptive AI” (segmentation, detection, identification, classification…), “Generative AI” (summarization, reasoning…), and “Agentic AI” (chain of thought, workflow orchestration, agent triggering…).

The evolution of AI-centric edge computing—retail (Source: Hailo)
Actually, there’s an item missing from the Perceptive → Generative → Agentic progression shown above—a missing link in the chain, as it were. Without cheating (looking below), can you guess what it is? This is something many of us neglect, but it can be innocuously introduced (into the workflow) and surprisingly efficacious. I’m talking about “Enhancive AI” as illustrated below.

The evolution of AI-centric edge computing—security (Source: Hailo)
The “Enhancive AI” moniker applies to any AI that enhances or improves some aspect of data, content, or a process, including images, text, audio, video, sensor signals, and systems and workflows.
In the case of security applications, for example, modern AI-based enhancement systems can reduce noise in low-light footage while preserving edges, increase contrast and dynamic range to reveal obscured structure, deblur motion or defocus within physical limits, super-resolve fine structure (e.g., characters on a number plate), and stabilize and fuse multiple frames to recover clearer detail than any single frame contains.
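To make that list a little more concrete, here’s a minimal Python sketch of two of those steps (multi-frame temporal denoising and local contrast enhancement) using OpenCV’s classical APIs as stand-ins for the learned models a production enhancive-AI pipeline would actually deploy:

```python
# A minimal sketch of two enhancement steps, using OpenCV's classical
# denoising and contrast APIs as stand-ins for learned models.
import cv2

def enhance(frames, idx):
    """Denoise frames[idx] using its temporal neighbors, then boost local contrast.

    `frames` is a list of same-size 8-bit BGR images; `idx` needs at least
    one neighbor on each side for the 3-frame temporal window used here.
    """
    # Multi-frame temporal denoising fuses information from adjacent
    # frames, recovering cleaner detail than any single frame contains.
    denoised = cv2.fastNlMeansDenoisingColoredMulti(
        frames, idx, 3, h=7, hColor=7,
        templateWindowSize=7, searchWindowSize=21)

    # Contrast-limited adaptive histogram equalization (CLAHE) on the
    # luminance channel reveals structure hidden in dark regions.
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
```

A real security pipeline would swap in trained networks for both stages, but the flow (fuse frames, then reveal structure) is the same.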
As an aside, I first saw a slightly different incarnation of enhancive AI in action a couple of years ago at one of Intel’s Architecture Days. In that case, it was used on-chip to take a 3D computer game rendered at 1080p and boost it to 4K resolution frame-by-frame in real time. In the old days (say 10 years ago), increasing the resolution of an image was achieved using tried-and-true techniques like interpolation, but these approaches could only guess at missing detail, often yielding something softer, blurrier, and faintly disappointing.
Enhancive AI, by contrast, doesn’t merely stretch pixels—it reconstructs plausible detail based on learned patterns, restoring sharpness, texture, and visual richness in ways that feel almost magical. The result is higher apparent resolution, improved clarity, and better performance, all without the computational burden of rendering every pixel natively. Quietly tucked into the pipeline, enhancive AI delivers outsized benefits, making existing systems look faster, sharper, and altogether more capable than they have any right to be.
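If you’d like to see the difference for yourself, the sketch below contrasts old-school bicubic interpolation with a small learned super-resolution network (ESPCN, via OpenCV’s dnn_superres module). Note the assumptions: it requires the opencv-contrib-python package, a pretrained ESPCN_x4.pb weights file downloaded separately, and frame.png is a placeholder input:

```python
import cv2

img = cv2.imread("frame.png")  # placeholder input frame

# Old-school upscaling: bicubic interpolation can only guess at
# missing detail, so edges go soft.
bicubic = cv2.resize(img, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

# Learned super-resolution: a small CNN (ESPCN here) reconstructs
# plausible high-frequency detail learned during training.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x4.pb")   # pretrained weights, downloaded separately
sr.setModel("espcn", 4)       # model name and 4x scale factor
upscaled = sr.upsample(img)

cv2.imwrite("bicubic_x4.png", bicubic)
cv2.imwrite("espcn_x4.png", upscaled)
```

Compare the two outputs side by side and the “stretched pixels versus reconstructed detail” distinction becomes obvious.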
As another aside (my mind is merrily meandering along, as is its wont), enhancive AI is also transforming large-format immersive entertainment. In presentations of The Wizard of Oz at the Las Vegas Sphere, for example, AI techniques have been used not only to upscale the original film’s resolution far beyond its historical format, but also to reconstruct and extend portions of the visual scene so that the imagery convincingly fills the Sphere’s vast wraparound display. Rather than merely enlarging existing frames, machine-learning models analyze texture, lighting, perspective, and cinematic context to generate visually coherent detail that blends seamlessly with the restored footage. Although I’ve not seen it myself (I drool in anticipation of one day being so lucky), the result is claimed to be breathtaking.
Speaking of headgear for hardware (we weren’t, but we are now), have you ever wondered about the origin of the words “Shield” in the context of Arduino and “HAT” in the context of Raspberry Pi (RPi)?
Well, early Arduino expansion boards were dubbed “shields” because they stacked neatly on top of the base board, like a protective layer, adding useful features (e.g., sensors, motor control), while visually resembling electronic armor. When the Raspberry Pi community later defined its own add-on ecosystem, it opted for the more cheerfully literal term HAT, short for “Hardware Attached on Top,” which I like to think of as single-board computers that take the time to dress properly for the occasion.
The reason I’m waffling on about RPi HATs is that I was just chatting with Max Glover, Chief Revenue Officer (CRO) at Hailo Technologies. It’s widely acknowledged that Max is a delightful fellow: quick-witted, silver-tongued, and outrageously good-looking. But enough about me; we’re here to talk about the other one.
As you may recall from my earlier writings (see Are These the Top-Performing Edge AI Processors?), Hailo is an edge-AI chipmaker focused on delivering data-center-class deep-learning performance within the tight power, cost, and latency constraints of embedded and edge devices. Rather than relying on cloud connectivity, Hailo’s specialized processors enable real-time perception, analysis, and decision-making directly where the data is created—an approach that improves responsiveness, preserves privacy, and dramatically reduces system cost.
The company’s first-generation Hailo-8 AI Accelerator quickly earned a reputation as a remarkably capable and efficient engine for what we might now call “Classical Edge AI” (it’s scary how fast things are progressing, to the extent that we can already refer to this sort of AI as “classical” without laughing), including vision processing, object detection and identification, and the like, delivering impressive performance within extremely modest power envelopes. That device also found its way into the Raspberry Pi ecosystem as the original Raspberry Pi AI HAT+, providing makers, developers, and product designers with an unusually accessible path to serious edge intelligence.
Now comes the next act. Hailo’s second-generation Hailo-10H AI Accelerator builds on that proven foundation while adding native support for generative AI workloads running entirely on-device, including large language models (LLMs) and vision language models (VLMs), all while maintaining the low latency, efficiency, and privacy advantages that define edge computing.
But wait, there’s more: Max (the other one) was telling me about the recently introduced Raspberry Pi AI HAT+ 2. Based on the Hailo-10H, this effectively places real, local generative AI capability directly on top of a Pi, thereby transforming a familiar single-board computer (SBC) into something that feels far closer to a self-contained intelligent system than a mere development platform. While the original HAT continues to serve classical vision workloads, this spiffy new generation opens the door to fast, local generative pipelines and real-time “voice-to-action” style applications that operate even without internet connectivity.
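Just to show the shape of such a “voice-to-action” loop, here’s a minimal sketch using the llama-cpp-python library as a stand-in (an actual Hailo-10H deployment would use Hailo’s own runtime rather than this library, model.gguf is a placeholder for whatever small local model you have on hand, and I’m assuming speech has already been transcribed to text); the point is simply that a transcript goes in, an action comes out, and nothing ever leaves the device:

```python
# Sketch of a local "voice-to-action" loop. llama-cpp-python is a
# CPU-bound stand-in here, NOT Hailo's SDK; the loop shape is the point.
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=2048)  # placeholder model path

SYSTEM = ("You control a kiosk. Answer with exactly one action, e.g., "
          "SHOW_MENU, CALL_STAFF, or PRINT_TICKET.")

def voice_to_action(transcript: str) -> str:
    # No request leaves the device: prompt in, action token out.
    out = llm(f"{SYSTEM}\nUser: {transcript}\nAction:",
              max_tokens=8, stop=["\n"])
    return out["choices"][0]["text"].strip()

print(voice_to_action("I'd like to see today's lunch specials"))
```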

Meet the RPi AI HAT+ 2 (Source: Hailo)
By bringing efficient, low-latency generative AI directly onto a Raspberry Pi, this new HAT dramatically expands the universe of practical edge applications. Intelligent kiosks and digital signage can shift from static menus and labyrinthine touch-screen hierarchies to natural, conversational interaction. Security and safety systems can move beyond passive monitoring to real-time understanding, summarization, and response. Industrial and retail environments can fuse perception, enhancement, and generation into tightly coupled pipelines that sense what is happening, reason about what it means, and trigger appropriate actions—all without waiting for the cloud to think things through.
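Stripped to its skeleton, such a fused pipeline might look like the following sketch, where all five parameters are placeholders for whatever models and handlers a real system would plug in:

```python
# Hypothetical sense -> reason -> act loop; every stage here is a
# placeholder callable, not a real library API.
def run_pipeline(camera, detector, enhancer, vlm, actuator):
    for frame in camera:              # sense: grab the next frame
        events = detector(frame)      # perceptive AI: what is there?
        if not events:
            continue                  # nothing of interest; keep watching
        clear = enhancer(frame)       # enhancive AI: denoise, sharpen, fuse
        summary = vlm(clear, events)  # generative AI: what does it mean?
        actuator(summary)             # agentic AI: trigger the response
```

Notice how the stages map directly onto the Perceptive → Enhancive → Generative → Agentic progression we met earlier.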
Perhaps most exciting is the way this capability lowers the barrier to experimentation. Because the platform is compact, power-efficient, and rooted in the familiar Raspberry Pi ecosystem, innovators can explore ideas that would previously have demanded far larger budgets, heavier compute, or persistent connectivity. In effect, serious on-device intelligence becomes something you can hold in your hand, prototype on your bench, and ultimately deploy at scale.
For years, AI has primarily lived in distant data centers and has been perceived as being vast, powerful, and impersonal. Devices like the Raspberry Pi AI HAT+ 2 reflect how intelligence is moving outward, toward the physical world, toward the edge, and toward the countless everyday systems that surround us. When perception, enhancement, generation, and (eventually) autonomy can all reside inside small, efficient machines, the boundary between “computer” and “environment” begins to blur.
It’s a thought worth thinking (which is why I thought it) that future historians of technology may look back on this time as the point at which AI stopped being “somewhere else” and started being “everywhere.”


