
What Do We Do About Multicore?

I’m always suspicious when a PowerPoint slide says we’re at a turning point in history. It strikes me as egotistical to think that today is somehow qualitatively different from yesterday. Sure, chips always get faster and software always gets more complex – how is that an inflection point? You’re just trying to sell me something, aren’t you?

The exception to this self-imposed rule is multicore microprocessors. I really do think that multicore is a game-changer. It makes hardware design different, it makes software design different, it makes EDA and software-development tools different, and it makes jobs different. Multicore isn’t just “more better faster.” It’s time to think different.

Paradoxically, multicore has been around for a long time. Talk to anyone in research or academia, and they'll tell you they've been studying, modeling, and even building multicore systems for decades. It's only recently, however, that the multicore phenomenon has entered the public consciousness. Everyone thinks it's new when in fact it's been around for ages. That's both good news and bad news.

The good news is, a lot of smart people have been studying multicore processors for years, looking at ways to partition them, program them, debug them, and design them. The bad news is, they haven’t found many good answers. The worse news is, this is about to become your problem. The monster has escaped from the research laboratory and is bad-assing around town, wreaking havoc on the screaming populace. That would be you, Dear Reader.

Who’s Doing What To Whom

At last week’s Multicore Virtual Conference (an online “meeting” that’s available in its entirety at www.EETimes.com/multicore), nearly 20 experts from around the world debated what to do. In the keynote, Stanford University’s EE/CS Professor Kunle Olukotun pointed out that processor cores have become the new Moore’s Law: their number doubles every year or so. Today’s four-core processor chip becomes tomorrow’s eight-core beast, and so on. Run for your lives, it’s mutating! This Andromeda Strain–like redoubling has no clear end in sight, either. Chips with over 4000 processor cores have already been sighted in the vicinity of Silicon Valley.

During a software-oriented panel discussion, the experts pondered the role of programming languages. Can mere C code adequately corral multicore processors, or do we all need to learn a new language? The panel seemed split; some thought that existing languages could, in the right hands, effectively manage multicore code, while others agitated for new and different tools. (In the spirit of full disclosure, I should point out that I was co-chairman of the conference and moderator of most of the sessions.)

A lot of studies have focused on C as a tool for multicore programming, and most have found it wanting. The feeling is that C basically sucks at multicore code. That is, its syntax and vocabulary can’t effectively express parallelism because the language itself doesn’t support it. Most other programming languages also fall into this category; they’re inherently serial because we unconsciously created them that way, just as human languages are inherently serial. Perhaps as a species we’re not suited to expressing parallelism in any efficient manner.
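To make that concrete, here's a minimal sketch of what "parallel C" looks like in practice: the parallelism comes from a bolted-on library (POSIX threads, in this case), not from anything in the language's own syntax. The worker function and thread count are purely illustrative.

```c
/* Minimal sketch: parallelism expressed through a library (pthreads),
   not through C itself. Names here are illustrative only. */
#include <pthread.h>
#include <stdio.h>

#define N_THREADS 4

static void *worker(void *arg) {
    int id = *(int *)arg;
    /* Each thread would do its slice of the work here. */
    printf("worker %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[N_THREADS];
    int ids[N_THREADS];

    /* Nothing in C's syntax says "do these in parallel" --
       we spell it out, call by call. */
    for (int i = 0; i < N_THREADS; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < N_THREADS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```

Every fork, join, and shared variable is the programmer's problem; the compiler sees only ordinary sequential function calls.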

Barring major evolution in our languages or ourselves, we’re left with C and Java and a handful of other popular programming languages as the default tools for spackling serial code over the rough surface of multicore processors. It sticks, but it ain’t pretty.

The conference’s hardware panel looked at, well, hardware design. How many cores are enough and how many are too many? The answer, as is so often the case, depends on what you’re doing and whom you’re trying to impress. Two identical cores will work fine for some “embarrassingly parallel” tasks, while other chips sport a dozen processor cores, all different. The mix-and-match approach (in tech speak: heterogeneous) is actually the easier of the two to program. When all your cores are different, it’s pretty easy to decide which code should be running on which core. It’s when you have a pool of four, eight, or 16 identical cores that things get tricky. How do you partition the workload and who (or what) decides?
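For illustration, here's one toy answer to the partitioning question, assuming the simplest possible case: a static, programmer-chosen split of an array sum across four identical cores. The four-way split and all the names are my own assumptions, not anything prescribed at the conference.

```c
/* Toy sketch of the "who decides?" problem: statically carving an
   array sum across identical cores. All names are illustrative. */
#include <pthread.h>
#include <stdio.h>

#define N_CORES 4
#define N_ITEMS 1000000

static double data[N_ITEMS];
static double partial[N_CORES];

static void *sum_slice(void *arg) {
    int id = *(int *)arg;
    int chunk = N_ITEMS / N_CORES;
    int lo = id * chunk;
    int hi = (id == N_CORES - 1) ? N_ITEMS : lo + chunk;
    double s = 0.0;
    for (int i = lo; i < hi; i++)
        s += data[i];
    partial[id] = s;   /* each thread writes its own slot: nothing shared */
    return NULL;
}

int main(void) {
    for (int i = 0; i < N_ITEMS; i++)
        data[i] = 1.0;

    pthread_t t[N_CORES];
    int ids[N_CORES];
    for (int i = 0; i < N_CORES; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, sum_slice, &ids[i]);
    }

    double total = 0.0;
    for (int i = 0; i < N_CORES; i++) {
        pthread_join(t[i], NULL);
        total += partial[i];
    }
    printf("total = %f\n", total);
    return 0;
}
```

The point isn't the arithmetic; it's that the partitioning decision (chunk sizes, who gets the remainder, how results are combined) lives entirely in the programmer's head. Nothing in the hardware or the toolchain made that choice.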

Both audience members and speakers pointed the finger at operating system vendors – who pointed right back. Many programmers felt it was the task of the OS to carve up the software workload among the various hardware resources. The OS vendors pushed back, saying (with some justification) that’s impossible. The OS has no magic knowledge of how tasks should be run, or where the parallelism lies. Conflicts, interlocks, and dependencies are simply not divinable through any automated means. The programmer has to know what he’s doing.
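A small example of why that's so, again using illustrative names: the two threads below look completely independent to any scheduler, yet without the mutex (which only the programmer knows to add) they would silently lose updates to a shared counter.

```c
/* Why the OS can't divine dependencies: two threads that *look*
   independent but actually share state. Illustrative sketch. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        /* Without the lock, counter++ is a read-modify-write that two
           cores can interleave, dropping increments. No automated tool
           reliably infers that this protection is required. */
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```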

Whose Job Is It, Anyway?

The problem is, most programmers don’t know what they’re doing. At least, not when it comes to multicore. Throughout the conference, one message crept through: nobody wants to admit that we’ve got to learn our trade all over again. We’re all looking for a magic bullet that isn’t coming. What we want is a superhero; what we’ve got is ourselves.

Software people say they never asked for multicore; that it was foisted on them by the hardware guys. Hardware people say they never asked for multicore, either. Everyone wanted faster chips at lower power, and multicore was the only way to get there. Be careful what you wish for; you might get it. Now our wish has been granted. Processors are faster, cheaper, and more power-efficient than ever before, but we have little idea what to do with them.

Fortunately, there’s no sin in wasting processor power. If we don’t use all the performance of a multicore processor, that’s okay. But to move ahead – to stay on the Moore’s Law treadmill – we’ve got to learn to harness this unruly beast. And it’s not going to be easy. There’s no magic compiler, operating system, or EDA tool on the horizon that will effortlessly lift this burden from our shoulders. It’s going to mean relearning hardware design and relearning programming.

“May you live in interesting times” was a Chinese curse, not a blessing. Here in the embedded-design industry, we are blessed to be living in an accursed interesting time.
