Oftentimes, the decision comes down to “FPGA or ASIC.” But what if the decision were “FPGA or microprocessor?”
That’s essentially the value proposition from XMOS, the British microprocessor company that pitches its products not as alternatives to the usual rogues’ gallery of CPUs, but as an alternative to an FPGA.
And now that decision gets a little bit tougher.
You see, in the usual hardware/software partitioning that we’re all familiar with, you start out with fixed hardware resources (some combination of a CPU or MCU, some fixed logic, and maybe some programmable logic) and then you begin to apply software. Pretty standard, right?
But as you add software, you also add slowness. Not necessarily in a bad way, but in an unpredictable way. The trouble is, most software is not deterministic. Unlike hardware, you can’t be absolutely sure when (or sometimes, even how) the software will run. When will this I/O instruction execute, exactly? How many clock ticks will elapse between this signal transition and that output? We usually don’t know the answers to these questions. We buy a CPU or MCU that’s fast enough, and then we write software that’s also fast enough to get the job done. This gets particularly tricky in motor control and other real-time applications.
If the job’s not getting done on time, you get faster hardware or you write faster software.
But it’s always a game of margins. Sure, we can simulate, test, and count cycles until we feel comfortable that everything is fast enough. But it’s a rare system that has all of this pinned down and provably deterministic.
And the more software you add, the slower and more unpredictable the system gets. That’s just how computers work. The idea is to keep extraneous instructions out of your critical path(s) so that the hardware and software can still keep up with the system they’re controlling.
And if that variability, that unpredictability, isn’t okay with you, then you typically turn to dedicated hardware, often an FPGA. Hardware is nicely deterministic and predictable. It submits to simulation. It gets the job done and isn’t affected by software, wait loops, cache misses, buffer sizes, and other programming pitfalls.
But what if…
What if your microprocessor was totally deterministic? What if it didn’t matter how much software you loaded onto it, it still behaved just the same? What if your hardware CPU acted more like, well, hardware?
Then what you want is an xCore-200, the latest MCU to come out of XMOS. On the outside, the ’200 is an average-looking TQFP-64 microcontroller chip. On the inside, however, it’s something very different from your typical 8051, Cortex-M3, or AVR device. Like other XMOS processors, the newly announced ’200 is a 16-core multiprocessor with internal interconnect, soft peripherals, and a hard real-time scheduler.
The idea is that you program each of the 16 CPUs to carry out a relatively simple, predictable, and totally deterministic task. Nothing fancy. But taken together, you’ve got a powerful little MCU that can be relied upon to do dynamite motor control without the jitter you’d get from a typical MCU’s unpredictability. It’s programmable hardware, without the FPGA tools.
You even get to invent your own interfaces. Because the I/O pins are under software control, you can program one or more of the internal CPUs to wiggle them any way you see fit. Have an obsolete machine interface on the shop floor you need to control? Make it so. Want to make your own high-speed serial or parallel interface? Go for it. Or maybe you’ve got a product that needs to adapt to one of a dozen different interfaces, but not all at the same time. Simply load up the appropriate driver code and be on your way.
Not everything has to be done in code. Some of the more standard interfaces, like Gigabit Ethernet, JTAG, or USB, are actually implemented in hardware. The remaining I/O pins are yours to define, and, even then, XMOS provides a library of driver code for common interfaces and functions.
Because each peripheral driver is encapsulated in a single CPU, performance doesn’t suffer – or indeed, change at all – when you add more code to the other CPUs. That’s a big philosophical change from traditional processors. Even multicore chips can’t entirely separate the software effects of one CPU on another CPU. Threads can compete for shared memory, invalidate each other’s caches, delay interrupt response, and more. We’re back to picking a CPU that’s fast enough and then programming it with enough margin that it won’t miss deadlines. XMOS turns that idea around and says that each CPU must do its work in the time allotted, no cross-contamination allowed. The 16-core xCore-200 simply does precisely 16 times as much as a single-core implementation would. No more, no less.
XMOS’s single-minded focus on determinism and predictability has, predictably, come at a cost. There isn’t a lot of third-party code for XMOS processors. You won’t find Android ports or Linux distributions for the ’200. It’s more of a driver-writing platform than a mainstream OS processor. Many users plop an XMOS processor down alongside a traditional CPU or MCU chip, as kind of an accelerator. It makes a good user-definable black box. Or an alternative to an FPGA for people who’d rather wield a compiler than an EDA tool.
10 thoughts on “XMOS xCore-200 Wants to Replace Peripherals”