I must admit that my head is “all over the place” today. I’m trying to juggle too many things at once, and the problem is that I can’t juggle. Well, that’s not strictly true. I can juggle 10 fine porcelain plates, but only for a very short period of time.
Speaking of porcelain (did you see what I just did there?), my mum was born in 1930. She insists that her family wasn't poor ("It's just that we didn't have a lot of money"). She also tells me they had only one cold-water tap (faucet) serving the house, and even that was mounted outside the main building. Their toilet was in a small, unlit brick outhouse at the far end of the yard, and their tin bath was brought up from the basement (i.e., the coal cellar) once a week for the family to bathe. Mum says they had fireplaces in every room, but they could afford only enough coal for the family room, which doubled as their kitchen. Apparently, the rest of the house was so cold that ice formed on the insides of the bedroom windows during the winter months. You can say what you want, but that sounds pretty poor to me.
Most years, all the kids in the area received only an orange and a “thrupenny bit” (a small coin worth three old English pennies) for Christmas. I don’t know how this came to pass, but when my mum was 5 years old, she woke up on Christmas morning to find that her orange and “thrupenny bit” were accompanied by a small porcelain doll. This was a rudimentary affair—mum’s mum (my grandma) had made its clothes—but my mum thought it was the most wonderful doll in the world. Unfortunately, later that day, her parents told her to let her 2-year-old sister, Shirley, play with the doll for a while.
Mum reluctantly handed it over, and Shirley whirled it around, accidentally smashing it into pieces.
This broke my mum’s heart. She’ll be 96 years old this year, and she’s still talking about it, so now it’s breaking my heart, too. If only I could get my time machine up and working, this is one of the things I’d go back and sort out. The reason I mention this here is that my wife (Gigi the Gorgeous) had an idea—she purchased a porcelain doll and sent it to my mum for this Christmas that’s just passed. My mum says she’s named the doll Claire Louise, and it has pride of place next to the TV in her front room. Mum and I FaceTime every morning, and Claire Louise always sends her kindest regards.
But that’s not what I wanted to talk about…
Recently, I was chatting with Shashank Bangalore Lakshman, who is a Graduate Student (M.S. in Artificial Intelligence) at the University of Texas at Austin. One of the topics in our wide-ranging conversation was Shashank's open-source (MIT License), vibe-coded "PixelSynth" project.
On the off chance you were wondering, “vibe-coded” is an informal, fairly recent slang term used—especially in software and creative tech circles—to describe something that was built by feel, intuition, or aesthetic sense rather than by strict engineering discipline or formal specification. Think of it as the coding equivalent of “It feels right.”
Bearing this in mind, PixelSynth is a creative coding tool that uses Python to procedurally compile and generate interactive p5.js (JavaScript) sketches for live webcam manipulation. These can be used by indie artists to create fun music videos on a zero budget!
Three of the myriad effects available are shown below. Hexagonal Mosaic samples colors into a honeycomb grid; Circle Halftone maps pixel brightness to the diameter of black circles on a white grid; and Bad Cable randomly drops the sync signal, causing the image to roll or shear horizontally.

Hexagonal Mosaic (Source: PixelSynth/Shashank Bangalore Lakshman)

Circle Halftone (Source: PixelSynth/Shashank Bangalore Lakshman)

Bad Cable (Source: PixelSynth/Shashank Bangalore Lakshman)
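In case you're curious how this sort of thing hangs together, here's a minimal sketch of the general recipe. To be clear, this is my own illustrative Python, not Shashank's actual PixelSynth code or templates; the function name, template strings, and parameters are all invented for this example. The point is simply that a little Python can assemble a p5.js sketch (here, a rough circle-halftone effect applied to the webcam feed) and write it out as a standalone HTML page you can open in a browser.

```python
# Illustrative only: this is NOT PixelSynth's code. It just shows the
# "Python assembles a p5.js sketch and writes an HTML page" pattern.

SKETCH_TEMPLATE = """
let cam;
const cell = {cell_px};  // grid spacing in pixels

function setup() {{
  createCanvas(640, 480);
  cam = createCapture(VIDEO);
  cam.size(640, 480);
  cam.hide();
  noStroke();
}}

function draw() {{
  background(255);
  cam.loadPixels();
  fill(0);
  for (let y = 0; y < height; y += cell) {{
    for (let x = 0; x < width; x += cell) {{
      const i = 4 * (y * cam.width + x);  // RGBA index of this sample point
      const bright = (cam.pixels[i] + cam.pixels[i + 1] + cam.pixels[i + 2]) / 3;
      // Darker pixels get bigger circles (classic halftone mapping)
      circle(x + cell / 2, y + cell / 2, map(bright, 0, 255, cell, 0));
    }}
  }}
}}
"""

HTML_TEMPLATE = """<!DOCTYPE html>
<html>
  <head>
    <!-- p5.js pulled from a CDN (URL assumed; any recent p5.js build will do) -->
    <script src="https://cdn.jsdelivr.net/npm/p5@1.9.0/lib/p5.min.js"></script>
  </head>
  <body>
    <script>
{sketch}
    </script>
  </body>
</html>
"""


def generate_halftone_page(cell_px: int = 12, out_path: str = "halftone.html") -> None:
    """Compile a p5.js circle-halftone sketch and write it as a standalone page."""
    with open(out_path, "w") as f:
        f.write(HTML_TEMPLATE.format(sketch=SKETCH_TEMPLATE.format(cell_px=cell_px)))


if __name__ == "__main__":
    generate_halftone_page()
```

Swap the template for a hexagonal grid or a fake sync-drop, and you get a whole family of effects out of the same little generator.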
I think this is amazingly cool, but that’s also not what I wanted to talk about…
The generative AI boom has sparked unprecedented demand for large-scale compute. This comes with a surging appetite for electricity, placing a real strain on existing infrastructure. Modern AI-centric data centers are far more power-hungry than traditional facilities, consuming dramatically more electricity per rack while generating intense cooling loads alongside their compute demands.
According to researchers and energy analysts, global data center power use—already equivalent to the electricity consumption of a sizeable industrialized nation—is on pace to more than double by 2030, driven chiefly by AI workloads that require specialized processors and constant operation.
Utilities and grid operators are already feeling the pain. In some regions, the need for steady electricity to fuel AI data centers has pushed older, dirtier, “peaker” power plants back into service to handle sudden demand spikes, and policymakers are debating whether AI power users should pay higher rates to safeguard residential supply. At the same time, major players like Meta are lining up gigawatts of dedicated generation capacity—including nuclear power deals—just to keep their AI infrastructure humming without overburdening local grids.
This backdrop matters because, unlike the classic narratives featuring “faster chips” or “denser GPUs,” the real bottleneck in AI scaling may well be power availability and how to use it smartly—especially in environments where raw watts are expensive, scarce, or both.
This is where Hammerhead AI comes into play. I recently had an interesting conversation with Rahul Kar, Hammerhead’s Founder and CEO. Prior to Hammerhead, Rahul was COO of AutoGrid Systems, a Stanford spin-out founded in 2013 that lived at the uncomfortable intersection of electric power, software, and real-world constraints.
AutoGrid didn't generate electricity, and it didn't build hardware; instead, it built software that treated electricity demand itself as a controllable asset. At its peak, AutoGrid's platform was coordinating roughly 8 GW of critical power assets across 12 countries; at one point, about 16% of France's total ancillary grid services were being bid and transacted through its systems.
In practical terms, AutoGrid helped utilities answer questions like: “Which flexible loads can be throttled back right now?” and “Where can power be released or absorbed without breaking service level agreements (SLAs)?” and “How do we trade ‘flexibility’ rather than raw electrons in energy markets?”
The key concept underlying AutoGrid's offering is the gap between peak and average demand. The idea is that power systems are built for worst-case demand. Most of the time, they operate far below that peak, and the unused headroom is valuable… if you can coordinate it safely. That experience matters because Hammerhead is essentially replaying the same playbook, but inside data centers instead of electrical grids.
Hammerhead isn’t trying to invent a better GPU, accelerator, or processor. What it aims to do is turn underutilized power capacity into revenue-generating AI compute—without violating existing service guarantees. The underlying reality is that modern data centers waste enormous amounts of power capacity, even while the AI industry claims it can’t get enough compute.
Most data centers are provisioned for the “worst imaginable day”—the one when all workloads hit simultaneously, ambient temperatures are high, cooling systems are stressed, and SLAs must still be met. In reality, average utilization is often 10–30% of peak capacity. The remaining margin isn’t “spare servers” so much as “unused raw watts sitting behind the meter.”
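To put some toy numbers on that (the facility size and safety margin below are my own made-up figures, not Hammerhead's; only the 10–30% utilization range comes from the discussion above):

```python
# Back-of-the-envelope illustration only; the 20 MW facility and 15%
# safety margin are invented for this example.
contracted_capacity_mw = 20.0    # what the facility is engineered to deliver
average_utilization = 0.25       # typical draw as a fraction of that peak
safety_margin = 0.15             # headroom held back to protect SLAs

average_draw_mw = contracted_capacity_mw * average_utilization
opportunistic_mw = contracted_capacity_mw * (1.0 - safety_margin) - average_draw_mw

print(f"Average draw: {average_draw_mw:.1f} MW")              # 5.0 MW
print(f"Opportunistic headroom: {opportunistic_mw:.1f} MW")   # 12.0 MW
```

In other words, a hypothetical facility like this one spends most of its life with more than half of its contracted watts doing nothing.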
Hammerhead's insight is that power is the real bottleneck, not silicon. If you can temporarily access the unused power headroom, then you can run interruptible, latency-tolerant AI workloads without jeopardizing premium customers. This is not about moving electricity around; it's about moving compute to where the power already exists.
One interesting point pertains to where the folks at Hammerhead operate. They aren't starting inside single-tenant hyperscale facilities of the Google or Microsoft variety (although they may well end up there). Instead, they're targeting co-location data centers, which are the "neutral hotels" of the compute world.
In these facilities, customers buy contracted power capacity (MW), not guaranteed energy usage. They bring their own servers, and they pay a premium for the right to surge to full capacity on demand. Today, if Customer A pays for 10 MW but uses only 2 MW, the remaining 8 MW sits idle. Hammerhead’s proposal is to let Customer B (running AI workloads) use that idle capacity at a lower price, with the understanding they must instantly back off if Customer A ramps up. The reason this works is that many AI workloads (transcription, fine-tuning, batch inference…) are delay-tolerant (a job that finishes in 45 minutes instead of 30 is still useful).
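If you squint, the core control rule is conceptually simple (the hard parts are telemetry, forecasting, and doing it safely at scale). Here's a minimal sketch of the back-off logic using the 10 MW / 2 MW example above; the function names, polling loop, and margin are my own inventions for illustration, not Hammerhead's implementation.

```python
import time

CONTRACTED_MW = 10.0    # Customer A's contracted capacity
MARGIN_MW = 1.0         # headroom held back to absorb sudden ramps


def read_customer_a_draw_mw() -> float:
    """Stand-in for real power telemetry (e.g., branch-circuit metering)."""
    return 2.0  # pretend Customer A is idling at 2 MW


def set_interruptible_budget_mw(budget_mw: float) -> None:
    """Stand-in for pausing/throttling Customer B's delay-tolerant AI jobs."""
    print(f"Interruptible workloads capped at {budget_mw:.1f} MW")


def control_loop(cycles: int = 5, poll_seconds: float = 1.0) -> None:
    for _ in range(cycles):
        draw_a = read_customer_a_draw_mw()
        # Whatever Customer A isn't using (minus a margin) is fair game for
        # Customer B's batch jobs. If A ramps toward 10 MW, the budget
        # collapses toward zero and B's jobs get paused or checkpointed.
        budget_b = max(0.0, CONTRACTED_MW - MARGIN_MW - draw_a)
        set_interruptible_budget_mw(budget_b)
        time.sleep(poll_seconds)


if __name__ == "__main__":
    control_loop()  # prints "capped at 7.0 MW" while Customer A sits at 2 MW
```

In practice, of course, "back off" means pre-emptable jobs that can checkpoint and resume, plus hard enforcement that doesn't depend on a Python loop behaving itself.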
This is where the worldview shift happens. Traditional compute efficiency is measured by asking, “How many FLOPS do I get per watt?” But the AI economy now asks, “How many tokens can I generate per watt-hour?” This reframing matters. Hammerhead isn’t optimizing processors—it’s optimizing economic yield from fixed power budgets.
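As a trivially simple worked example of the new metric (with made-up numbers, purely to show the units):

```python
# Hypothetical numbers, purely to illustrate the metric's units.
tokens_per_second = 2_500     # sustained generation throughput of one server
server_power_watts = 7_000    # wall power drawn by that server

tokens_per_watt_hour = tokens_per_second * 3600 / server_power_watts
print(f"{tokens_per_watt_hour:,.0f} tokens per watt-hour")  # ~1,286
```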
In this sense, Hammerhead sits above things like chip design, memory hierarchies, and optical interconnects; it's a meta-layer that orchestrates when and where AI workloads run based on power availability, not silicon novelty.
Whether Hammerhead AI ultimately proves to be a quiet enabler or a genuine inflection point remains to be seen. For now, it sits in that familiar liminal zone between “obvious in hindsight” and “sounds easier than it probably is” (which is why Rahul’s role in AutoGrid Systems’ success is so compelling).
There are real engineering, commercial, and trust boundaries to cross before underutilized watts can be reliably turned into revenue-generating tokens at scale. That said, the underlying premise that power, not silicon, is becoming the scarcest resource in AI is hard to dismiss. If the next wave of innovation comes not from faster chips but from smarter orchestration of the infrastructure we already have, then Hammerhead may be poised to address a problem the rest of the industry is only just beginning to recognize.