
What Do We Do About Multicore?

I’m always suspicious when a PowerPoint slide says we’re at a turning point in history. It strikes me as egotistical to think that today is somehow qualitatively different from yesterday. Sure, chips always get faster and software always gets more complex – how is that an inflection point? You’re just trying to sell me something, aren’t you?

The exception to this self-imposed rule is multicore microprocessors. I really do think that multicore is a game-changer. It makes hardware design different, it makes software design different, it makes EDA and software-development tools different, and it makes jobs different. Multicore isn’t just “more better faster.” It’s time to think different.

Paradoxically, multicore has been around for a long time. Talk to anyone in research or academia, and they'll tell you they've been studying, modeling, and even building multicore systems for decades. It's only recently, however, that the multicore phenomenon has entered the public consciousness. Everyone thinks it's new when in fact it's been around for ages. That's both good news and bad news.

The good news is, a lot of smart people have been studying multicore processors for years, looking at ways to partition them, program them, debug them, and design them. The bad news is, they haven’t found many good answers. The worse news is, this is about to become your problem. The monster has escaped from the research laboratory and is bad-assing around town, wreaking havoc on the screaming populace. That would be you, Dear Reader.

Who’s Doing What To Whom

At last week’s Multicore Virtual Conference (an online “meeting” that’s available in its entirety at www.EETimes.com/multicore), nearly 20 experts from around the world debated what to do. In the keynote, Stanford University’s EE/CS Professor Kunle Olukotun pointed out that processor cores have become the new Moore’s Law: their number doubles every year or so. Today’s four-core processor chip becomes tomorrow’s eight-core beast, and so on. Run for your lives, it’s mutating! This Andromeda Strain–like redoubling has no clear end in sight, either. Chips with over 4000 processor cores have already been sighted in the vicinity of Silicon Valley.

During a software-oriented panel discussion, the experts pondered the role of programming languages. Can mere C code adequately corral multicore processors, or do we all need to learn a new language? The panel seemed split; some thought that existing languages could, in the right hands, effectively manage multicore code, while others agitated for new and different tools. (In the spirit of full disclosure, I should point out that I was co-chairman of the conference and moderator of most of the sessions.)

A lot of studies have focused on C as a tool for multicore programming, and most have found it wanting. The feeling is that C basically sucks at multicore code. That is, its syntax and vocabulary can’t effectively express parallelism because the language itself doesn’t support it. Most other programming languages also fall into this category; they’re inherently serial because we unconsciously created them that way, just as human languages are inherently serial. Perhaps as a species we’re not suited to expressing parallelism in any efficient manner.
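To make the complaint concrete, here's a minimal sketch of my own (plain C with POSIX threads; the slice struct and the thread count are illustrative, not taken from any of those studies) showing what it takes to spread a simple data-parallel loop across four cores. Every scrap of the parallelism lives in library calls and hand-rolled bookkeeping; the language's own syntax contributes exactly nothing:

/* Sketch: summing an array across four POSIX threads in C.
 * The parallelism is expressed entirely through library calls and
 * manual bookkeeping; the C language itself has no syntax for it. */
#include <pthread.h>
#include <stdio.h>

#define N           1000000
#define NUM_THREADS 4

static double data[N];
static double partial[NUM_THREADS];   /* one result slot per thread */

struct slice { int id, start, end; }; /* illustrative helper struct */

static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    double acc = 0.0;
    for (int i = s->start; i < s->end; i++)
        acc += data[i];
    partial[s->id] = acc;             /* private slot, so no lock needed */
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    struct slice slices[NUM_THREADS];
    int chunk = N / NUM_THREADS;

    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    /* Carve the array into per-thread slices by hand... */
    for (int t = 0; t < NUM_THREADS; t++) {
        slices[t].id    = t;
        slices[t].start = t * chunk;
        slices[t].end   = (t == NUM_THREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&threads[t], NULL, sum_slice, &slices[t]);
    }

    /* ...then join the threads and reduce the results by hand. */
    double total = 0.0;
    for (int t = 0; t < NUM_THREADS; t++) {
        pthread_join(threads[t], NULL);
        total += partial[t];
    }
    printf("sum = %f\n", total);
    return 0;
}

All of that scaffolding exists to say the one thing the language can't: do these iterations at the same time.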

Barring major evolution in our languages or ourselves, we’re left with C and Java and a handful of other popular programming languages as the default tools for spackling serial code over the rough surface of multicore processors. It sticks, but it ain’t pretty.

The conference’s hardware panel looked at, well, hardware design. How many cores are enough and how many are too many? The answer, as is so often the case, depends on what you’re doing and whom you’re trying to impress. Two identical cores will work fine for some “embarrassingly parallel” tasks, while other chips sport a dozen processor cores, all different. The mix-and-match approach (in tech speak: heterogeneous) is actually the easier of the two to program. When all your cores are different, it’s pretty easy to decide which code should be running on which core. It’s when you have a pool of four, eight, or 16 identical cores that things get tricky. How do you partition the workload and who (or what) decides?

Both audience members and speakers pointed the finger at operating system vendors – who pointed right back. Many programmers felt it was the task of the OS to carve up the software workload among the various hardware resources. The OS vendors pushed back, saying (with some justification) that’s impossible. The OS has no magic knowledge of how tasks should be run, or where the parallelism lies. Conflicts, interlocks, and dependencies are simply not divinable through any automated means. The programmer has to know what he’s doing.
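A tiny example of my own makes the point. In the loop below, each iteration reads the result of the one before it, so no OS or compiler can safely carve it across cores; a naive four-way split would compute garbage. Only the programmer knows this is a running total that would have to be restructured (as a parallel scan, say) before it could use a second core:

/* Sketch: a loop-carried dependency that defeats automatic
 * parallelization. Iteration i needs the result of iteration i-1. */
#include <stdio.h>

int main(void)
{
    double a[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

    for (int i = 1; i < 8; i++)
        a[i] = a[i - 1] + a[i];   /* depends on the previous iteration */

    printf("running total: %g\n", a[7]);   /* prints 36 */
    return 0;
}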

Whose Job Is It, Anyway?

The problem is, most programmers don't know what they're doing. At least, not when it comes to multicore. Throughout the conference, one message kept creeping through: nobody wants to admit that we've got to learn our trade all over again. We're all looking for a magic bullet that isn't coming. What we want is a superhero; what we've got is ourselves.

Software people say they never asked for multicore; it was foisted on them by the hardware guys. Hardware people say they never asked for multicore, either. Everyone wanted faster chips at lower power, and multicore was the only way to get there. Be careful what you wish for; you might get it. Now our wish has been granted. Processors are faster, cheaper, and more power-efficient than ever before, but we have little idea what to do with them.

Fortunately, there’s no sin in wasting processor power. If we don’t use all the performance of a multicore processor, that’s okay. But to move ahead – to stay on the Moore’s Law treadmill – we’ve got to learn to harness this unruly beast. And it’s not going to be easy. There’s no magic compiler, operating system, or EDA tool on the horizon that will effortlessly lift this burden from our shoulders. It’s going to mean relearning hardware design and relearning programming.

“May you live in interesting times” was a Chinese curse, not a blessing. Here in the embedded-design industry, we are blessed to be living in an accursed interesting time.
