
x86 in Embedded Systems

“Only two things are infinite: the universe and human stupidity, and I’m not sure about the universe.” Thus spake Albert Einstein, a man who knew a thing or two about big spaces and long stretches of time. Given a few more years, he might have made the same observation about Intel’s x86 processor family and its infinite attraction to programmers. 

If I may indulge in another quotation, it’s been said that there are two kinds of programmers: those who admit they hate the x86, and liars. Nobody really likes programming x86-family processors, at least nobody I’ve ever met who’d worked with any other chip family. 

If there were a god of microprocessor design, She would have smitten Intel long ago for creating such an abomination. The entire family tree comes from a bad seed, bearing the taint of the original 8086 and its spawn. Strong men weep, children faint, and wolves chew their own legs off rather than program x86 chips in assembly language. The sight of too much x86 code has been known to drive men mad. Its importation is limited in some countries. Humanitarian efforts bring aid to countless victims. International talks proceed quietly behind the scenes to ban its use and manufacture. Yet stockpiles exist to this day. Stop the madness!

Apocalyptic Cataclysm

Or something like that. In actual fact, the x86 has never been more popular, especially in embedded systems. Sure, we all equate “x86” with “PC,” and the PC business has certainly been very good to Intel, to say nothing of AMD and a handful of other x86 producers. But the original 8086 started out as an embedded processor, as did the 8080, 8008, and 4004 before it. And even now, more x86 processors are used in embedded applications than in all the world’s PCs put together. 

Financially, the x86 franchise is a goldmine. Intel is the world’s largest semiconductor manufacturer by revenue, yet x86 chips make up only a tiny share of worldwide microprocessor unit volume. All of Intel’s PC processors put together account for perhaps 2% of the world’s microprocessors; the other 98% are mostly embedded processors from other companies. That is what’s known in financial circles as serious leverage. 

Technically, the x86 architecture is a horror show. It’s the poster child for everything RISC extremists rail against. Mothers, don’t let your sons grow up to be x86 programmers. This is your brain on x86, etc. Its abstruse and byzantine internal design is a veritable plumber’s nightmare of engineering mishaps, misfortunes, and calamities, a silicon house of cards waiting to collapse on the unwary programmer. There’s nothing regular, elegant, or orthogonal about it. To use a disparaging military analogy, it’s like the F-4 Phantom: proof that, with a big enough engine, even a brick can fly. 

Yet the x86 architecture soldiers on. You can still buy 8086 chips and even download knockoff 8086, ’286, and ’386 designs for your FPGA. Pentium and Core Duo processors find their way into embedded systems all the time, and entire PC motherboards are commonly found in cash registers and airport kiosks. Surely there’s something behind all this popularity?

Acceptance

There are good reasons for x86’s popularity, of course, and one of them is inertia. In Einstein’s world, physical inertia governs the motions of planets as well as subatomic particles (we think). In engineering, inertia is no less powerful, guiding boardrooms and lab benches alike. If the previous generation of the product used a Brand X processor, the next generation probably will, too. What’s the point of switching if the old chip works? And as the oldest microprocessor architecture in the world, x86 has a head start over absolutely everyone in terms of longevity. A body at rest tends to stay at rest unless disturbed by an outside force, and all that.

That outside force came in the form of dozens of new processor families and chip makers over a 30-year period. They made a dent in the massive inertial incumbency of x86 but didn’t dislodge it. Where new chip families like ARM and MIPS and PowerPC caught on was where the x86 wasn’t: where brand-new applications sprang up that owed nothing to the early embedded or PC-based systems that x86 dominated. Like Willie Keeler said, hit ’em where they ain’t. Then you become the incumbent, as ARM has done with mobile telephones. 

The other reason for x86’s enduring popularity is software. Not just the big backlog of existing software, which is a side-effect of longevity, but new software. Software-development trends actually favor the x86, in spite of its software-unfriendly design and venerable hardware architecture. High-level languages, Java, and virtualization all work to prolong the x86’s dominance. 

Fewer and fewer programmers work in assembly language these days. Most are using C or C++ (or using C and calling it C++), with a few more using Java, BASIC, LabVIEW, Pascal, and other specialty languages. These high-level languages hide the processor’s inner mysteries from view, so most programmers care very little about the meat grinder that lurks inside their chip. That’s the compiler’s problem. C code looks pretty much the same no matter what your target. As assembly-language programmers get laid off, retire, or die, the close tie between code base and processor architecture unravels. Software becomes increasingly “generic” and CPU-independent, and inertia wins again.
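
To see what that means in practice, here’s a minimal sketch, assuming nothing beyond an ordinary C compiler: a hypothetical 16-bit ones’-complement checksum written in plain, portable C. Nothing in it knows or cares which instruction set the compiler targets.

    #include <stddef.h>
    #include <stdint.h>

    /* Portable C: nothing below depends on the target instruction set.
     * The meat grinder inside the chip is the compiler's problem. */
    uint16_t checksum16(const uint8_t *buf, size_t len)
    {
        uint32_t sum = 0;

        while (len > 1) {                      /* sum 16-bit words */
            sum += ((uint32_t)buf[0] << 8) | buf[1];
            buf += 2;
            len -= 2;
        }
        if (len)                               /* odd trailing byte */
            sum += (uint32_t)buf[0] << 8;

        while (sum >> 16)                      /* fold carries back in */
            sum = (sum & 0xFFFF) + (sum >> 16);

        return (uint16_t)~sum;
    }

Build it for a PC-class x86 with something like gcc -Os -c checksum.c, or for an ARM microcontroller with arm-none-eabi-gcc -Os -c checksum.c; the source file is identical either way, and only the toolchain invocation changes.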

Redemption

Is any of this bad news? Not at all. It’s said that market success is inversely proportional to technical elegance. If that’s true, using x86 processors would practically guarantee victory and treasure. (It’s certainly worked for Intel; less so for AMD, NexGen, Chips & Technologies, Rise, Centaur, Cyrix, MemoryLogix, Transmeta, and others.) 

Setting engineering prejudices aside, the x86 family has some practical advantages that lend it continued success. It’s certainly popular, and, just as in the celebrity world, popularity breeds more popularity. There’s nothing wrong with joining a big and well-supported crowd. It’s easy to hire x86 programmers. It’s easy to find technical support when a million other developers are using the same chip as you. And it’s unlikely that you’ll discover a bug that no one else has seen. There’s comfort in knowing you’re traveling well-trodden ground. Bosses don’t like surprises, and there are few surprises left in this old girl. 

There’s also the huge base of software, from real-time kernels to operating systems to middleware to finished applications. If your value-add isn’t in one of these layers, why not just use what the vast user base has to offer? An adage about inventing wheels comes to mind. 

The x86 family also has very good code density, a happy side effect of its pathologically irregular instruction set. For all their academic appeal, RISC processors got this feature badly wrong (as we saw last week). If memory space is important, x86 is a decent choice. 
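
As a rough, hypothetical illustration of why (instruction sizes are approximate and vary with compiler, mode, and addressing), consider what a compiler can emit for one trivial statement:

    /* Hypothetical size comparison, not measured compiler output:
     *   x86:  a single "inc dword ptr [counter]" does the whole job in
     *         roughly 6 bytes, because instructions are variable length
     *         and can operate directly on memory.
     *   Classic fixed-width RISC (32-bit MIPS or original ARM):
     *         load, add, store -- three 4-byte instructions, 12 bytes,
     *         plus whatever it takes to form the address of counter.
     */
    volatile unsigned int counter;

    void bump(void)
    {
        counter++;
    }

Variable-length encodings and memory-operand instructions are exactly the kind of irregularity that happens to make compiled x86 code compact.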

Power consumption is one area where x86 really falls down, hardware-wise. For all Intel’s talk about “ultra low power” and “mobile” processors, the company can’t paper over the x86’s deficiencies in power efficiency. An x86 chip can easily consume 3x to 5x the power of a more modern processor at the same performance level. There are some signs of doddering old age you just can’t hide.

Still, for all its faults and foibles, the x86 is here to stay. We love it (collectively, if not individually) like an old dog: lame and smelly, but also familiar and predictable. It’s the “go-to” chip for projects that don’t need special hardware features or have unusual requirements. It’s the vanilla ice cream amid the 31 designer flavors: often boring but rarely truly bad. At least you know what you’re getting. It divides the engineering world into x86 users and everyone else. And as a great person once said, “there are 10 kinds of people: those who understand binary arithmetic and those who don’t.”

One thought on “x86 in Embedded Systems”

  1. I wish that I could find one other person who has not been brainwashed into believing the hype about superscalar, out-of-order execution, loooong pipelines, multi-core, etc.

    Since day one, CPU performance has mainly been limited by memory. WELL, IT STILL IS! Before caches were invented, memory was interleaved to allow more than one access at a time. But a cache transfers blocks of data, so memory accesses must be timed around the block transfer.

    The whole superscalar/cache approach was based on intuitive performance gains when doing matrix inversion. NOT GENERAL-PURPOSE COMPUTING.

    Microsoft Research’s “Where’s the Beef?” study found that the lowly FPGA can outperform CPUs, superscalar x86s, etc. Now it seems that Apple is on the heterogeneous bandwagon.

    Now that it has been shown that the interrupt scheme needs to be replaced … maybe THE END IS NEAR.

    And there’s another thing: everybody seems to think that self-modifying code is a wonderful, necessary thing.

    AND THE HACKERS JUST LOVE THE OPEN DOOR!
