
What Do We Do About Multicore?

I’m always suspicious when a PowerPoint slide says we’re at a turning point in history. It strikes me as egotistical to think that today is somehow qualitatively different from yesterday. Sure, chips always get faster and software always gets more complex – how is that an inflection point? You’re just trying to sell me something, aren’t you?

The exception to this self-imposed rule is multicore microprocessors. I really do think that multicore is a game-changer. It makes hardware design different, it makes software design different, it makes EDA and software-development tools different, and it makes jobs different. Multicore isn’t just “more better faster.” It’s time to think different.

Paradoxically, multicore has been around for a long time. Talk to anyone in a research lab or university and they’ll tell you they’ve been studying, modeling, and even building multicore systems for decades. It’s only recently, however, that the multicore phenomenon has entered the public consciousness. Everyone thinks it’s new when in fact it’s been around for ages. That’s both good news and bad news.

The good news is, a lot of smart people have been studying multicore processors for years, looking at ways to partition them, program them, debug them, and design them. The bad news is, they haven’t found many good answers. The worse news is, this is about to become your problem. The monster has escaped from the research laboratory and is bad-assing around town, wreaking havoc on the screaming populace. That would be you, Dear Reader.

Who’s Doing What To Whom

At last week’s Multicore Virtual Conference (an online “meeting” that’s available in its entirety at www.EETimes.com/multicore), nearly 20 experts from around the world debated what to do. In the keynote, Stanford University’s EE/CS Professor Kunle Olukotun pointed out that processor cores have become the new Moore’s Law: their number doubles every year or so. Today’s four-core processor chip becomes tomorrow’s eight-core beast, and so on. Run for your lives, it’s mutating! This Andromeda Strain–like redoubling has no clear end in sight, either. Chips with over 4000 processor cores have already been sighted in the vicinity of Silicon Valley.

During a software-oriented panel discussion, the experts pondered the role of programming languages. Can mere C code adequately corral multicore processors, or do we all need to learn a new language? The panel seemed split; some thought that existing languages could, in the right hands, effectively manage multicore code, while others agitated for new and different tools. (In the spirit of full disclosure I should point out that I was co-chairman of the conference and moderator of most of the sessions.)

A lot of studies have focused on C as a tool for multicore programming, and most have found it wanting. The feeling is that C basically sucks at multicore code. That is, its syntax and vocabulary can’t effectively express parallelism because the language itself doesn’t support it. Most other programming languages also fall into this category; they’re inherently serial because we unconsciously created them that way, just as human languages are inherently serial. Perhaps as a species we’re not suited to expressing parallelism in any efficient manner.
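To see what that means in practice, here’s a minimal sketch of the kind of thing multicore C programmers end up writing. It’s my own illustration, not code shown at the conference, and the names are made up: a simple summation split across two POSIX threads. Nothing in the C language itself says “run these in parallel”; the concurrency is buried in library calls that the compiler treats like any other function calls.

    /* A minimal, hypothetical sketch (not from the article) of how parallelism
     * in C lives outside the language: the compiler sees only ordinary function
     * calls into a threading library (POSIX threads here), so it cannot reason
     * about what actually runs concurrently. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    static double data[N];

    struct slice { int start, end; double sum; };

    static void *partial_sum(void *arg)
    {
        struct slice *s = arg;
        s->sum = 0.0;
        for (int i = s->start; i < s->end; i++)
            s->sum += data[i];
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        /* The "parallelism" is just two explicit thread-create calls. */
        struct slice lo = { 0, N / 2, 0.0 }, hi = { N / 2, N, 0.0 };
        pthread_t t1, t2;
        pthread_create(&t1, NULL, partial_sum, &lo);
        pthread_create(&t2, NULL, partial_sum, &hi);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("total = %f\n", lo.sum + hi.sum);
        return 0;
    }

The compiler has no idea that partial_sum can safely run concurrently with itself; that knowledge lives only in the programmer’s head and in the pthread_create calls.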

Barring major evolution in our languages or ourselves, we’re left with C and Java and a handful of other popular programming languages as the default tools for spackling serial code over the rough surface of multicore processors. It sticks, but it ain’t pretty.

The conference’s hardware panel looked at, well, hardware design. How many cores are enough and how many are too many? The answer, as is so often the case, depends on what you’re doing and whom you’re trying to impress. Two identical cores will work fine for some “embarrassingly parallel” tasks, while other chips sport a dozen processor cores, all different. The mix-and-match approach (in tech speak: heterogeneous) is actually the easier of the two to program. When all your cores are different, it’s pretty easy to decide which code should be running on which core. It’s when you have a pool of four, eight, or 16 identical cores that things get tricky. How do you partition the workload and who (or what) decides?
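Here’s a rough sketch of what that partitioning decision looks like for a pool of identical cores. Again, it’s my own illustration with hypothetical names, not something presented at the conference: the programmer picks a chunk size and lets each worker thread claim chunks from a shared counter, because neither the hardware nor the OS will make that choice.

    /* A hedged sketch of partitioning work over a pool of identical cores:
     * the programmer, not the OS, decides to split the job into fixed-size
     * chunks that worker threads claim from a shared atomic counter. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    #define ITEMS 100000
    #define CHUNK 1024          /* the partitioning decision, made by a human */

    static atomic_int next_item = 0;
    static double results[ITEMS];

    static void process(int i) { results[i] = (double)i * 0.5; }  /* stand-in work */

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            int start = atomic_fetch_add(&next_item, CHUNK);
            if (start >= ITEMS)
                break;
            int end = start + CHUNK < ITEMS ? start + CHUNK : ITEMS;
            for (int i = start; i < end; i++)
                process(i);
        }
        return NULL;
    }

    int main(void)
    {
        long cores = sysconf(_SC_NPROCESSORS_ONLN);  /* identical cores available */
        if (cores < 1) cores = 1;
        if (cores > 64) cores = 64;

        pthread_t tid[64];
        for (long c = 0; c < cores; c++)
            pthread_create(&tid[c], NULL, worker, NULL);
        for (long c = 0; c < cores; c++)
            pthread_join(tid[c], NULL);

        printf("processed %d items on %ld cores\n", ITEMS, cores);
        return 0;
    }

Change CHUNK, or the number of workers, and the performance changes. That tuning is exactly the kind of decision nobody else can make for you.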

Both audience members and speakers pointed the finger at operating system vendors – who pointed right back. Many programmers felt it was the task of the OS to carve up the software workload among the various hardware resources. The OS vendors pushed back, saying (with some justification) that’s impossible. The OS has no magic knowledge of how tasks should be run, or where the parallelism lies. Conflicts, interlocks, and dependencies are simply not divinable through any automated means. The programmer has to know what he’s doing.
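A trivial (and hypothetical) example shows why. In the sketch below, two threads each increment a shared total a million times. Nothing in the source code announces that the two updates conflict; only the programmer knows that the increment needs a lock, and no operating system or compiler can safely divine that on its own.

    /* A small, hypothetical illustration of why the OS cannot partition work
     * on its own: nothing in this code tells a scheduler that the two updates
     * below conflict. Only the programmer knows the shared total needs a lock. */
    #include <pthread.h>
    #include <stdio.h>

    static long total = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *add_million(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* remove this pair and the answer  */
            total += 1;                   /* silently comes out wrong         */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, add_million, NULL);
        pthread_create(&t2, NULL, add_million, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("total = %ld (expected 2000000)\n", total);
        return 0;
    }

Take the lock out and the program still compiles, still runs, and quietly produces the wrong answer.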

Whose Job Is It, Anyway?

The problem is, most programmers don’t know what they’re doing. At least, not when it comes to multicore. Throughout the conference, one message crept through: nobody wants to admit that we’ve got to learn our trade all over again. We’re all looking for a magic bullet that isn’t coming. What we want is a superhero; what we’ve got is ourselves.

Software people say they never asked for multicore, that it was foisted on them by the hardware guys. Hardware people say they never asked for multicore, either. Everyone wanted faster chips at lower power, and multicore was the only way to get there. Be careful what you wish for; you might get it. Now our wish has been granted. Processors are faster, cheaper, and more power-efficient than ever before, but we have little idea what to do with them.

Fortunately, there’s no sin in wasting processor power. If we don’t use all the performance of a multicore processor, that’s okay. But to move ahead – to stay on the Moore’s Law treadmill – we’ve got to learn to harness this unruly beast. And it’s not going to be easy. There’s no magic compiler, operating system, or EDA tool on the horizon that will effortlessly lift this burden from our shoulders. It’s going to mean relearning hardware design and relearning programming.

“May you live in interesting times” is reputedly a Chinese curse, not a blessing. Here in the embedded-design industry, we are blessed to be living in an accursed interesting time.
