Every once in a while, I run across a story that’s awesome in its simplicity, whilst still managing to be mindbogglingly complex “under the hood,” as it were. But fear not, because I’m sure I’ll manage to make things confusing if I try (and even if I don’t).
This is a story in three parts. Let’s start with the folks at XMOS and their XCORE devices. XMOS was founded in 2005 in Bristol, England. The name XMOS is a nod to Inmos, which was the pioneering UK company behind the Transputer.
“What’s a Transputer?” I hear you say. Well, in the late 1970s and early 1980s, most processors offered only a single core, a single hardware thread, and a single instruction stream. Examples are the Intel 8080 and 8086, the Zilog Z80, the Motorola 6800 and 68000, the MOS 6502, and the National 32016. All of these executed one instruction at a time, from one program counter, with no hardware context switching and no simultaneous execution of independent instruction streams. If you wanted concurrency, you faked it in software using interrupts, cooperative multitasking, and preemptive time-slicing (on bigger systems).
Before you start shouting at me and gesticulating furiously, I’ll freely admit that multiprocessor systems did exist in those days of yore. Mainframes and minicomputers could have multiple CPUs on a backplane, shared memory, and operating system (OS)-managed scheduling. However, each CPU was still single-core, communication was slow and complex, and scaling was expensive and non-linear. This was system-level multiprocessing, not chip-level multicore.
Admittedly, Cray and CDC machines had deep pipelines, could issue vector operations, and appeared to run in parallel from the outside. But they still had a single instruction stream, they didn’t have multiple independent cores, and they didn’t support concurrent threads in the modern sense.
The Transputer flipped all this on its head. It supported multiple hardware-scheduled processes on a single chip; context switches were extremely fast (hardware-assisted, no OS tricks), and programs were written assuming many things happen at once. Each Transputer had dedicated serial links offering point-to-point, deterministic, low-latency communication. There was no shared bus and no arbitration mess. Transputers could be wired into networks: lines, rings, meshes, hypercubes, etc. This meant that parallel computers could be built by simply plugging processors together.
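The Transputer’s programming model, communicating sequential processes (CSP) as expressed natively in the occam language, lives on today in languages like Go, whose channels make the idea easy to sketch. The pipeline below is purely illustrative: goroutines stand in for Transputers and unbuffered channels for their serial links, although Go’s runtime scheduler offers none of the Transputer’s timing determinism.

```go
package main

import "fmt"

// stage models one "Transputer" in a line: it reads values from its
// input link, applies f, and forwards the result on its output link.
func stage(in <-chan int, out chan<- int, f func(int) int) {
	for v := range in {
		out <- f(v)
	}
	close(out)
}

// runPipeline wires two stages into a line (source -> double -> +1 -> sink)
// using point-to-point channels in place of the chip's serial links.
func runPipeline(inputs []int) []int {
	a := make(chan int)
	b := make(chan int)
	c := make(chan int)

	go stage(a, b, func(v int) int { return v * 2 }) // first "chip": double
	go stage(b, c, func(v int) int { return v + 1 }) // second "chip": increment

	go func() {
		for _, v := range inputs {
			a <- v
		}
		close(a)
	}()

	var out []int
	for v := range c {
		out = append(out, v)
	}
	return out
}

func main() {
	fmt.Println(runPipeline([]int{1, 2, 3})) // [3 5 7]
}
```

Scaling up, in the Transputer spirit, means adding more stages (or more parallel lines of them) rather than making any single stage faster.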
I think it’s fair to say that the Transputer introduced the concept of scalability before “scaling” was fashionable. You didn’t upgrade a Transputer-based system by buying a faster CPU. Instead, you added more Transputers, recompiled, and let the concurrency scale. This concept of performance through replication is now fundamental to multicore CPUs, GPUs, AI accelerators, and data centers. The Transputer anticipated all this by 30+ years.
Despite its brilliance, the Transputer struggled commercially and eventually bit the dust. But turn that frown upside down into a smile, because all was not lost. David May, one of the Transputer architects, later co-founded XMOS, and XCORE devices reuse many of the same core ideas, including deterministic timing, hardware-scheduled threads, message-passing over shared state, and software-defined I/O. In many respects, XCORE devices are essentially the Transputer re-imagined for real-time embedded systems.
What makes XCORE so powerful is its determinism. Hardware-scheduled threads, predictable instruction timing, and software-defined I/O allow external events to be detected and serviced with cycle-accurate precision—often within tens of nanoseconds—without the jitter, latency, or uncertainty of conventional interrupt-driven systems. By contrast, ARM Cortex-M devices, for example, typically rely on interrupts with latencies measured in dozens to hundreds of clock cycles, and timing can vary depending on system state.
In 2017, XMOS acquired Setem Technologies, enhancing its capabilities in voice separation and advanced audio algorithms, thereby strengthening its leadership in voice and audio processing solutions. I remember one of the things that blew my socks off (which may explain my current penchant for elasticated footwear) was their ability to solve the “cocktail party problem,” in which multiple people are talking at the same time. They could disassemble the soundscape into individual voices—and this was in 2017, which is essentially the same as saying “The Dark Ages” today.
Sad to relate, all is not “milk and honey.” XCORE’s deterministic, concurrency-first programming model comes with a learning curve, especially for engineers steeped in conventional ARM-plus-interrupt thinking. Once teams make that investment, they tend to become converts—XMOS has shipped on the order of 40 million XCORE devices to date—so the real challenge has long been persuading busy engineers to invest the time to learn something completely (well, fundamentally) different.
And now for something completely different, as Monty Python might have said. Part 2 of our tale involves something I’ve been working on with a friend (well, he’s doing most of the work whilst I busy myself making pointless suggestions, which means we’re both playing to our strengths).
I don’t want to give too much away because this is ultra-top-secret. Suffice it to say that we’ve come up with something awesomely clever involving teeny-tiny microcontrollers passing packets of data back and forth like circus jugglers. The nitty-gritty low-level details are so convoluted as to make a grizzled old engineer cry (not me, you understand… I just have something in my eye). So, we’ve also created a simple scripting language and an associated virtual machine. This is accompanied by a PC application that allows users to capture their scripts for upload to the microcontrollers. And part of this involves defining the formats for packets to be transmitted and those expected to be received. (I hope I haven’t provided enough information for you to replicate what we’ve done.)
Regarding building these packets, we started creating a book of examples along the lines of, “If you want to do XXX, then you have to do YYY.” The problem is that the packet specification itself has nothing to do with us. It was created by a committee comprising multiple companies that give the impression of never actually talking to each other.
Our solution is to use Generative AI (GenAI). Now, when creating their scripts, users can simply use natural language to describe the high-level packet functionality they envisage, and the tool will automatically generate the packet for them (excluding any data conveyed by the packet, where such data is generated at runtime).
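Since I can’t share the real specification, here’s a generic sketch of the kind of fixed-format packet assembly involved. The type/length/payload/checksum layout below is entirely invented for illustration; the actual committee-defined format is (considerably) more convoluted.

```go
package main

import "fmt"

// buildPacket assembles a hypothetical packet: a 1-byte message type,
// a 1-byte payload length, the payload itself, and a trailing XOR
// checksum computed over everything that precedes it. This layout is
// invented for the example and does not match any real specification.
func buildPacket(msgType byte, payload []byte) []byte {
	pkt := []byte{msgType, byte(len(payload))}
	pkt = append(pkt, payload...)
	var sum byte
	for _, b := range pkt {
		sum ^= b
	}
	return append(pkt, sum)
}

func main() {
	// Message type 0x01 with a two-byte payload.
	fmt.Printf("% X\n", buildPacket(0x01, []byte{0x0A, 0x0B})) // 01 02 0A 0B 02
}
```

In the GenAI workflow described above, the tool’s job is essentially to choose field values like the message type and structure from the user’s natural-language description; any runtime data carried by the packet is still filled in on the microcontroller itself.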
And this is where we bring everything back home with the concluding piece to this column. I was just chatting with Mark Lippett, President and CEO of XMOS. Mark was regaling me with details pertaining to their recently introduced Generative SoC (GenSoC).
My knee-jerk reaction was that this was an SoC capable of running generative AI workloads. Well, although XCORE in general (and XCORE AI in particular) can execute small neural networks and perform feature extraction, pre-processing, post-processing, machine learning (ML) inference, control, timing, and orchestration around AI models (phew!), that’s NOT what we are talking about here.
At its heart, GenSoC is about changing the starting point of embedded-system design. Instead of beginning with registers, peripherals, and scheduling minutiae, you begin by stating intent. You can say something as simple as, “Generate me a stereo audio DSP pipeline with compression,” and the GenSoC tools will assemble a complete design, producing a block diagram, mapping the functions onto XCORE resources, and generating deployable source code behind the scenes. What would traditionally take days or weeks of careful architectural work can now happen in minutes.
Crucially, this isn’t a one-shot trick. GenSoC supports iterative refinement in natural language. If you decide the design needs more functionality, you can simply say, “Add reverb,” or “Add a noise-reduction stage,” or “Change this to a two-channel output,” and the system will update the design accordingly. The AI isn’t just emitting code; it’s continuously reasoning about timing, resource usage, and data flow as the system evolves.

GenSoC: Designing SoCs using GenAI (Source: XMOS)
What makes GenSoC especially compelling is that natural language and graphical design are fully synchronized. You’re not locked into a chat window. You can drop down to a block diagram view, manually tweak connections or components, and GenSoC immediately understands what you’ve changed. Those edits propagate back into the underlying system description, so the AI, the diagram, and the generated code are always looking at the same design state. In effect, the block diagram and the language prompt become two views of the same truth.
This bidirectional flow matters because real engineers don’t design in a straight line (I know I don’t—I can barely walk in a straight line). Sometimes it’s faster to describe a change in words; sometimes it’s easier to grab a block and move it. GenSoC accommodates both modes of interaction, without forcing you to “re-explain” your design to the tool every time you make a change.
Plenty of platforms can generate code. What makes GenSoC special is that its use with XCORE devices guarantees not only functional correctness but also timing correctness. The blocks GenSoC assembles are fully characterized for timing and resource consumption, and XCORE’s deterministic architecture ensures those properties remain true when the system runs on real hardware. As Mark put it, you could try a similar approach with a conventional MCU, but asynchronous interrupts, caches, and system state would destroy any confidence you have that the generated design would actually behave as intended.
Because XCORE provides statically predictable behavior, GenSoC can move beyond “best-effort code generation” into something closer to behavioral compilation: transforming a high-level description of what you want into a system you can realistically take to production.
Perhaps the most interesting implication of all this is what it does to accessibility. GenSoC doesn’t eliminate complexity; it absorbs it. Engineers can stay at the level of their application domain (audio, control, robotics, sensing) without first mastering the device’s microarchitecture. That doesn’t just speed up experienced teams; it makes whole classes of systems accessible to people who would previously have bounced off the learning curve.
In short, GenSoC isn’t about using AI on XCORE. It’s about using AI to unlock what XCORE has been able to do all along—realize deterministic, parallel, real-time systems—by finally giving engineers a way to talk to silicon in the language they already use to think.
The first release of GenSoC targets audio applications. I asked Mark for some examples. I soon regretted asking because he hit me with a deluge of possibilities (and I wasn’t suitably dressed for a deluge) as follows:
Music & Audio Production
- Digital Audio Workstations (DAWs) – with EQ, compression, reverb, and other effects
- Synthesizers – real-time audio synthesis with effects processing
- Audio Plugins (VST/AU) – effects and instruments for music production
- Mixing/Mastering Software – professional audio mixing tools with compression, EQ, limiting
Consumer Electronics
- Microphone Preamps – with noise gates, compression, and EQ
- Headphone Amplifiers – with volume control and audio processing
- Portable Amplifiers – guitar/bass amps with effects
- In-ear Monitors (IEMs) – personal mixing systems for live performers
Home Audio & Entertainment
- Smart Speakers – with audio processing and voice enhancement
- Home Theater Systems – surround sound processing and EQ
- Audio Receivers – multichannel audio processing with effects
- Karaoke Systems – echo, reverb, and vocal effects
Noise & Voice Enhancement
- Noise Suppression Apps – for video calls and conferencing (Teams, Zoom, etc.)
- Voice Enhancement Devices – hearing aids, speech clarity enhancement
- Background Noise Cancellation – for podcasts and streaming
- Vocal Processing Units – for singers and streamers
Gaming & Streaming
- Game Audio Engines – dynamic sound processing and effects
- Streaming Software – audio processing for Twitch/YouTube streaming
- Gaming Headsets – spatial audio and voice effects
- Real-time Audio Enhancement – for online gaming
Automotive
- In-car Audio Systems – EQ, surround sound, noise cancellation
- Hands-free Systems – speech enhancement and noise suppression
- Premium Audio Packages – Bose/Harman-style processing
Professional Audio
- Broadcast Audio Processing – compression, limiting, EQ
- Live Sound Reinforcement – mixing consoles and processors
- Podcast Production Tools – audio normalization and effects
- Audio Restoration – noise reduction and quality enhancement
Mobile & Wearables
- Smartphone Audio Apps – equalizers, sound enhancers, effect processors
- Audio Enhancement for Hearing Impaired – personalized frequency processing
- Fitness Wearables – audio feedback and effects
- Smart Earbuds – noise cancellation, spatial audio, voice enhancement
Will you be attending CES 2026? If so, Mark tells me that XMOS will be offering exclusive, hands-on previews of GenSoC. Also, if you’re interested, the XMOS team will be hosting private meetings at The Palazzo at The Venetian Resort throughout CES (you can book your private meeting here).
I don’t know about you, but I’m tremendously enthused by all this. As usual, of course, it’s not all about me (it should be, but it’s not). What do you think about what you’ve read here?



