Do You Want To Be An AI Plumber?

Increasingly, I find myself talking to people who say something like, “We have a 1-, 2-, 4-, or 8-core proof-of-concept AI chip that can—in the future—be scaled up to 1,000 cores.” But this is the first time I’ve been told, “We have a real-world 1,000-core AI chip that can—in the future—be scaled down to 1, 2, 4, or 8 cores.”

Before we delve deeper into the fray with gusto and abandon, I’d like us all to cast our minds back to ye olden days when I wore a younger man’s clothes.

Many of today’s younger electronic-systems developers tend to think of “open source” as a relatively modern invention, something that arrived alongside GitHub, Linux, and the maker movement. If the truth be told, however, the concepts of open source and open systems date back to the earliest days of computing in the 1950s and 1960s, when software was freely shared among academic and corporate researchers. The term “open source” as a “thing” wasn’t coined until 1998.

By the end of the 1960s, the computing world was dominated by heavyweight mainframe and minicomputer operating systems such as IBM’s OS/360, Honeywell’s GCOS, and DEC’s early PDP-series systems. The trouble was that these environments were proprietary, incompatible, and utterly unwilling to talk to each other. Each vendor built its own hardware, its own operating system, its own compilers, its own utilities, and, effectively, its own universe.

Software wasn’t portable. Tools couldn’t be shared. Programs written for one system were useless on another. And because these operating systems were massive, monolithic, and tightly controlled by corporations, experimenting with them or evolving them was almost impossible.

The arrival of UNIX in 1969 changed all that. It brought a radically more straightforward philosophy: small tools, composable interfaces, a clean filesystem design, and, most importantly, portability once the system was rewritten in C in 1973. For the first time, an operating system could move across hardware platforms, be modified by researchers, and evolve as a community effort.

But UNIX had its own issues. It was fragmented across vendors, each of whom proudly took the “standard” system and modified it until it barely resembled the original. Every flavor—SunOS, HP-UX, AIX, IRIX, Xenix, and more—was almost UNIX… but not quite. The resulting landscape was vibrant but messy; portability was promised but seldom delivered.

Then along came Linux. This began in 1991 when Linus Torvalds, a 21-year-old Finnish student, wrote a small Unix-like kernel “just for fun” on his 386 PC. He released it under the GNU General Public License, allowing anyone to study, modify, and improve it. The open-source community quickly piled in, pairing Linus’s kernel with the GNU userland tools to form a complete operating system. Throughout the 1990s and 2000s, Linux spread from hobbyist desktops to servers, supercomputers, smartphones (Android), embedded systems, and cloud datacenters. Today, Linux is everywhere, from IoT widgets to the world’s fastest supercomputers, powered by millions of contributors worldwide.

As the Linux ecosystem flourished, something else arose: the Linux Plumbers Conference (LPC). The name was not chosen lightly. These “software plumbers” are the engineers who make the system’s unseen infrastructure work. They’re the “pipe fitters” of the digital world. They worry about schedulers, drivers, memory models, and kernel boundaries. They make everything else possible. Over time, the LPC became the place where deep infrastructure engineers gathered to hash out the low-level details that keep the modern world running.

Today, a new frontier is emerging that desperately needs its own plumbers. Yes, of course, we’re talking about the world of AI in all its multifarious forms, including Generative AI and Agentic AI.

Over the past few years, AI has exploded across every domain, including vision, language, reasoning, coding, and even the creation of disturbingly realistic photos and videos of cats wearing monocles. But almost all this progress has come wrapped in proprietary silicon, stacks, and tooling. Want to innovate at the hardware level? Tough. Want to build your own inference engine with full visibility? Good luck. Want to bypass CUDA to build something truly new? Get in line.

But turn that frown upside down into a smile, because things are not quite as grim as they seem. The reason I say this is that I was just chatting with Tanya Dadasheva and Roman Shaposhnik, who (along with Doug Smith) are co-founders of Ainekko.

Ainekko’s mission is nothing less than bringing open-source principles all the way down to the raw silicon. As part of this, Ainekko recently acquired all of the intellectual property of Esperanto Technologies, the company that built one of the most ambitious RISC-V chips ever. This bodacious beast of an inference engine boasts over a thousand 64-bit RISC-V cores, with the entire chip operating below 50 watts and coolable with nothing more exotic than a chunk of passive metal.

This chip wasn’t a toy or a prototype. It taped out at TSMC 7 nm four years ago and is fully functional today. In fact, Ainekko has hundreds of them, enough to hand out to interested open-source developers.

Every core is a 64-bit RV64 general-purpose processor with SIMD/vector extensions developed before the official RISC-V vector (RVV) standard was finalized. Several RISC-V engineers have privately told Ainekko that these extensions are better suited to small edge-class devices than the comparatively heavyweight RVV standard.
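
For anyone who hasn’t bumped into RISC-V vector code, here’s a minimal sketch of what a strip-mined loop looks like using the standard RVV 1.0 C intrinsics (it needs a toolchain with vector support, e.g., -march=rv64gcv). To be clear, this is offered purely as a point of comparison; Esperanto’s pre-RVV extensions use their own instructions, which are not shown here.

    #include <riscv_vector.h>
    #include <stddef.h>
    #include <stdint.h>

    /* c[i] = a[i] + b[i], strip-mined: vsetvl decides how many elements
       each pass handles, so the same code runs on any hardware vector
       length. Standard RVV 1.0 intrinsics, NOT Esperanto's extensions. */
    void vec_add(int32_t *c, const int32_t *a, const int32_t *b, size_t n)
    {
        for (size_t i = 0; i < n; ) {
            size_t vl = __riscv_vsetvl_e32m1(n - i);           /* elements this pass */
            vint32m1_t va = __riscv_vle32_v_i32m1(a + i, vl);  /* load a slice of a */
            vint32m1_t vb = __riscv_vle32_v_i32m1(b + i, vl);  /* load a slice of b */
            vint32m1_t vc = __riscv_vadd_vv_i32m1(va, vb, vl); /* vector add */
            __riscv_vse32_v_i32m1(c + i, vc, vl);              /* store the result */
            i += vl;
        }
    }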

The architecture of this device (well, the naming convention its designers chose to use) is delightfully whimsical:

  • Cores are called minions
  • 8 cores form a neighborhood
  • 4 neighborhoods form a shire (yes, like Tolkien)
  • Shires tile to form chips of 1,000+ minions
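
To make the numbers concrete, here’s a quick back-of-the-envelope sketch in C. The 8-minions-per-neighborhood and 4-neighborhoods-per-shire figures come straight from the list above; the shire count needed to clear 1,000 minions is just my rounding, not a published floorplan.

    #include <stdio.h>

    /* 8 minions per neighborhood and 4 neighborhoods per shire are from
       the naming convention above; everything else is plain arithmetic. */
    enum {
        MINIONS_PER_NEIGHBORHOOD = 8,
        NEIGHBORHOODS_PER_SHIRE  = 4,
        MINIONS_PER_SHIRE = MINIONS_PER_NEIGHBORHOOD * NEIGHBORHOODS_PER_SHIRE /* 32 */
    };

    int main(void)
    {
        int target = 1000;  /* the "1,000+ minions" claim */
        /* Round up: how many shires must tile to clear the target? */
        int shires = (target + MINIONS_PER_SHIRE - 1) / MINIONS_PER_SHIRE;
        printf("%d shires -> %d minions\n", shires, shires * MINIONS_PER_SHIRE);
        /* Prints: 32 shires -> 1024 minions */
        return 0;
    }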

But beneath the whimsy lies a serious engineering ethos: open, general-purpose compute with flexible dataflow, NUMA-aware shire interconnects, and a network-on-chip (NoC) now being rearchitected as a fully open standard called NEMI, which some industry experts describe as potentially “industry changing.”
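
I can’t show you NEMI itself (the standard is still being rearchitected), but to give a flavor of the sort of thing an on-chip network worries about, here’s a textbook toy: dimension-ordered (XY) routing on a generic 2-D mesh, in which packets travel the full X distance first and then the Y distance. This is generic NoC material, not anything specific to NEMI.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hop count between two tiles under dimension-ordered (XY) routing:
       X first, then Y, so the hop count is the Manhattan distance. */
    static int xy_hops(int src_x, int src_y, int dst_x, int dst_y)
    {
        return abs(dst_x - src_x) + abs(dst_y - src_y);
    }

    int main(void)
    {
        /* e.g., a tile at (0,0) talking to a tile at (5,3) on a toy mesh */
        printf("hops = %d\n", xy_hops(0, 0, 5, 3)); /* prints: hops = 8 */
        return 0;
    }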

A key feature of this architecture is that it isn’t merely scalable in the marketing sense. It can be scaled downward to microcontroller-class devices and upward to multi-thousand-core inference engines. The same basic building blocks can power anything from a single-core edge sensor to a drone, a robot, or a cluster-class AI accelerator.

To further their mission, the folks at Ainekko have launched AI Foundry. This is an open-source community initiative that releases production-grade RTL, system emulators, software stacks, and developer tooling. They are pushing everything from chip-level building blocks to inference frameworks into the open for collaboration and remixing. As part of this, they’ve also open-sourced their system-level emulator, called CISAMU, which they say is far more robust than existing open options like Spike.

Ainekko and AI Foundry’s guiding principles include:

  • Open everything down to RTL.
  • Give developers the freedom to hack, profile, break, and improve.
  • Avoid vendor lock-in by supporting open-source chip-design tooling (including collaborations with Zero ASIC and Andreas Olofsson’s chiplet/open-tooling work; see Is This the Future of Chiplet-Based Design?).
  • Enable community-driven architectures that evolve faster than proprietary ones.
  • Provide services, not chips (Ainekko makes money by helping companies size, design, and tape out custom systems, not by selling silicon).

We can think of their business model as “Sun Microsystems meets Red Hat meets the RISC-V ethos.”

AI today has no equivalent to what the Linux Plumbers do for operating systems. There’s no gathering place for the engineers who care about things like chip-level dataflow, instruction-set extensions, inference kernels, open NoC design, emulator infrastructure, quantization strategies, long-term governance… basically the boring (but absolutely essential) bits.
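
Just to give a taste of how unglamorous this plumbing can be, here’s a toy symmetric int8 quantizer of the sort inference kernels depend on. It’s an illustrative sketch only; production stacks add calibration passes, per-channel scales, and saturation-aware integer arithmetic (and remember to link with -lm).

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Map a float onto int8 using a single symmetric scale factor. */
    static int8_t quantize(float x, float scale)
    {
        float q = roundf(x / scale);
        if (q >  127.0f) q =  127.0f;  /* clamp to the int8 range */
        if (q < -127.0f) q = -127.0f;
        return (int8_t)q;
    }

    static float dequantize(int8_t q, float scale)
    {
        return (float)q * scale;
    }

    int main(void)
    {
        float max_abs = 6.2f;             /* say, the largest |value| we calibrated */
        float scale   = max_abs / 127.0f; /* one scale covers the whole tensor */
        float x       = 3.14159f;
        int8_t q      = quantize(x, scale);
        printf("%f -> %d -> %f\n", x, q, dequantize(q, scale));
        return 0;
    }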

Ainekko’s founders believe that AI infrastructure has now matured to the point where the bottleneck is not the algorithms but the plumbing, and that this sort of plumbing is best done in the open. Thus, as part of AI Foundry, they are kicking off an open-source initiative called AI Plumbers. This is explicitly modeled after the LPC but focused on AI hardware and system infrastructure. Not the flashy stuff. Not the headline-grabbing breakthroughs. But rather the essential, invisible, deeply technical substrate that will lay the foundation for the coming years of AI computing.

All this arrives at a perfect historical moment. AI is undergoing the same transition UNIX underwent in the early 90s: from fragmented, proprietary islands to the need for a common, open substrate.

UNIX gave rise to Linux, Linux gave rise to the Linux Plumbers, and the Linux Plumbers made modern computing possible. AI has grown to the point where it needs its own version of all three. Ainekko is proposing an open AI substrate (the RTL), an open AI operating environment (the stack), and an open AI systems community, namely the AI Plumbers.

This is not a company announcement. This is an ideological statement. They’re not building a product. They’re building a movement (“…in four-part harmony, and stuff like that”).

So, if you are a developer, a researcher, a systems architect, a chip designer, or someone who simply believes that AI infrastructure should not be owned by a handful of corporations, now is the time to get involved.

AI Foundry has already gone live with RTL, emulators, tools, and documentation. The company is working with the Linux Foundation on governance. They are inviting early contributors to help shape what may become the CNCF of AI hardware. And they are actively seeding the community with real, working 1,000-core hardware.

If the open-source silicon revolution is going to happen, Ainekko may well be the group to ignite it. So, sharpen your tools, roll up your sleeves, and prepare to join the first wave of AI Plumbers (am I the only one who thinks “The Rise of the AI Plumbers” would make an awesome name for an opera?).
