
What Do We Do About Multicore?

I’m always suspicious when a PowerPoint slide says we’re at a turning point in history. It strikes me as egotistical to think that today is somehow qualitatively different from yesterday. Sure, chips always get faster and software always gets more complex – how is that an inflection point? You’re just trying to sell me something, aren’t you?

The exception to this self-imposed rule is multicore microprocessors. I really do think that multicore is a game-changer. It makes hardware design different, it makes software design different, it makes EDA and software-development tools different, and it makes jobs different. Multicore isn’t just “more better faster.” It’s time to think different.

Paradoxically, multicore has been around for a long time. Talk to anyone in research or academia, and they'll tell you they've been studying, modeling, and even building multicore systems for decades. It's only recently, however, that the multicore phenomenon has entered the public consciousness. Everyone thinks it's new when in fact it's been around for ages. That's both good news and bad news.

The good news is, a lot of smart people have been studying multicore processors for years, looking at ways to partition them, program them, debug them, and design them. The bad news is, they haven’t found many good answers. The worse news is, this is about to become your problem. The monster has escaped from the research laboratory and is bad-assing around town, wreaking havoc on the screaming populace. That would be you, Dear Reader.

Who’s Doing What To Whom

At last week’s Multicore Virtual Conference (an online “meeting” that’s available in its entirety at www.EETimes.com/multicore), nearly 20 experts from around the world debated what to do. In the keynote, Stanford University’s EE/CS Professor Kunle Olukotun pointed out that processor cores have become the new Moore’s Law: their number doubles every year or so. Today’s four-core processor chip becomes tomorrow’s eight-core beast, and so on. Run for your lives, it’s mutating! This Andromeda Strain–like redoubling has no clear end in sight, either. Chips with over 4000 processor cores have already been sighted in the vicinity of Silicon Valley.

During a software-oriented panel discussion, the experts pondered the role of programming languages. Can mere C code adequately corral multicore processors, or do we all need to learn a new language? The panel seemed split; some thought that existing languages could, in the right hands, effectively manage multicore code, while others agitated for new and different tools. (In the spirit of full disclosure I should point out that I was co-chairman of the conference and moderator of most of the sessions.)

A lot of studies have focused on C as a tool for multicore programming, and most have found it wanting. The feeling is that C basically sucks at multicore code. That is, its syntax and vocabulary can’t effectively express parallelism because the language itself doesn’t support it. Most other programming languages also fall into this category; they’re inherently serial because we unconsciously created them that way, just as human languages are inherently serial. Perhaps as a species we’re not suited to expressing parallelism in any efficient manner.
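
To make that concrete, here's a minimal sketch of what "parallel C" actually looks like in practice. The parallelism lives entirely in library calls – I'm assuming POSIX threads here purely for illustration – and in manual bookkeeping; nothing in C's own syntax expresses it. The array size, thread count, and function names are all made up.

```c
#include <pthread.h>
#include <stdio.h>

#define N        1000000   /* made-up problem size */
#define NTHREADS 4         /* made-up core count  */

static double data[N];
static double partial[NTHREADS];

struct slice { int id, start, end; };

/* Each thread sums its own slice of the array and writes the
 * result into its own slot, so the threads never race on
 * shared data. */
static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    double acc = 0.0;
    for (int i = s->start; i < s->end; i++)
        acc += data[i];
    partial[s->id] = acc;
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct slice s[NTHREADS];

    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    /* The programmer, not the language, decides the partition. */
    for (int t = 0; t < NTHREADS; t++) {
        s[t].id    = t;
        s[t].start = t * (N / NTHREADS);
        s[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, sum_slice, &s[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("sum = %.0f\n", total);   /* expect 1000000 */
    return 0;
}
```

Compile it with gcc -pthread and it works. But notice that every parallel decision – how many threads, where to cut the array, how to combine the results – is hand bookkeeping the language knows nothing about.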

Barring major evolution in our languages or ourselves, we’re left with C and Java and a handful of other popular programming languages as the default tools for spackling serial code over the rough surface of multicore processors. It sticks, but it ain’t pretty.

The conference’s hardware panel looked at, well, hardware design. How many cores are enough and how many are too many? The answer, as is so often the case, depends on what you’re doing and whom you’re trying to impress. Two identical cores will work fine for some “embarrassingly parallel” tasks, while other chips sport a dozen processor cores, all different. The mix-and-match approach (in tech speak: heterogeneous) is actually the easier of the two to program. When all your cores are different, it’s pretty easy to decide which code should be running on which core. It’s when you have a pool of four, eight, or 16 identical cores that things get tricky. How do you partition the workload and who (or what) decides?

Both audience members and speakers pointed the finger at operating system vendors – who pointed right back. Many programmers felt it was the task of the OS to carve up the software workload among the various hardware resources. The OS vendors pushed back, saying (with some justification) that’s impossible. The OS has no magic knowledge of how tasks should be run, or where the parallelism lies. Conflicts, interlocks, and dependencies are simply not divinable through any automated means. The programmer has to know what he’s doing.
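
Here's a hedged illustration of why (the two functions are hypothetical, invented just for this example). These loops look structurally identical to anything that sees only the code's surface, yet one can be split across cores and the other cannot.

```c
/* Loop A: every iteration is independent. Contiguous chunks of
 * the loop could run on different cores and the answer would
 * stay the same. */
void loop_a(double *x, int n)
{
    for (int i = 0; i < n; i++)
        x[i] = x[i] * 2.0;
}

/* Loop B: iteration i reads the result of iteration i-1, a
 * loop-carried dependency. Splitting it across cores silently
 * computes the wrong answer. The difference from loop A is
 * semantic, not syntactic, which is why no OS or scheduler can
 * reliably divine it on its own. */
void loop_b(double *x, int n)
{
    for (int i = 1; i < n; i++)
        x[i] = x[i] + x[i - 1];
}
```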

Whose Job Is It, Anyway?

The problem is, most programmers don’t know what they’re doing. At least, not when it comes to multicore. Throughout the conference, one message crept through: nobody wants to admit that we’ve got to learn our trade all over again. We’re all looking for a magic bullet that isn’t coming. What we want is a superhero; what we’ve got is ourselves.

Software people say they never asked for multicore – that it was foisted on them by the hardware guys. Hardware people say they never asked for multicore, either. Everyone wanted faster chips at lower power, and multicore was the only way to get there. Be careful what you wish for; you might get it. Now our wish has been granted. Processors are faster, cheaper, and more power-efficient than ever before, but we have little idea what to do with them.

Fortunately, there’s no sin in wasting processor power. If we don’t use all the performance of a multicore processor, that’s okay. But to move ahead – to stay on the Moore’s Law treadmill – we’ve got to learn to harness this unruly beast. And it’s not going to be easy. There’s no magic compiler, operating system, or EDA tool on the horizon that will effortlessly lift this burden from our shoulders. It’s going to mean relearning hardware design and relearning programming.

“May you live in interesting times” was a Chinese curse, not a blessing. Here in the embedded-design industry, we are blessed to be living in an accursed interesting time.
