
What Do We Do About Multicore?

I’m always suspicious when a PowerPoint slide says we’re at a turning point in history. It strikes me as egotistical to think that today is somehow qualitatively different from yesterday. Sure, chips always get faster and software always gets more complex – how is that an inflection point? You’re just trying to sell me something, aren’t you?

The exception to this self-imposed rule is multicore microprocessors. I really do think that multicore is a game-changer. It makes hardware design different, it makes software design different, it makes EDA and software-development tools different, and it makes jobs different. Multicore isn’t just “more better faster.” It’s time to think different.

Paradoxically, multicore has been around for a long time. Talk to anyone in research or academia, and they’ll tell you they’ve been studying, modeling, and even building multicore systems for decades. It’s only recently, however, that the multicore phenomenon has entered the public consciousness. Everyone thinks it’s new when in fact it’s been around for ages. That’s both good news and bad news.

The good news is, a lot of smart people have been studying multicore processors for years, looking at ways to partition them, program them, debug them, and design them. The bad news is, they haven’t found many good answers. The worse news is, this is about to become your problem. The monster has escaped from the research laboratory and is bad-assing around town, wreaking havoc on the screaming populace. That would be you, Dear Reader.

Who’s Doing What To Whom

At last week’s Multicore Virtual Conference (an online “meeting” that’s available in its entirety at www.EETimes.com/multicore), nearly 20 experts from around the world debated what to do. In the keynote, Stanford University’s EE/CS Professor Kunle Olukotun pointed out that processor cores have become the new Moore’s Law: their number doubles every year or so. Today’s four-core processor chip becomes tomorrow’s eight-core beast, and so on. Run for your lives, it’s mutating! This Andromeda Strain–like redoubling has no clear end in sight, either. Chips with over 4000 processor cores have already been sighted in the vicinity of Silicon Valley.

During a software-oriented panel discussion, the experts pondered the role of programming languages. Can mere C code adequately corral multicore processors, or do we all need to learn a new language? The panel seemed split; some thought that existing languages could, in the right hands, effectively manage multicore code, while others agitated for new and different tools. (In the spirit of full disclosure, I should point out that I was co-chairman of the conference and moderator of most of the sessions.)

A lot of studies have focused on C as a tool for multicore programming, and most have found it wanting. The feeling is that C basically sucks at multicore code. That is, its syntax and vocabulary can’t effectively express parallelism because the language itself doesn’t support it. Most other programming languages also fall into this category; they’re inherently serial because we unconsciously created them that way, just as human languages are inherently serial. Perhaps as a species we’re not suited to expressing parallelism in any efficient manner.

Barring major evolution in our languages or ourselves, we’re left with C and Java and a handful of other popular programming languages as the default tools for spackling serial code over the rough surface of multicore processors. It sticks, but it ain’t pretty.
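To see what that spackling looks like, here’s a minimal sketch in C with POSIX threads (my choice for illustration, not anything prescribed at the conference). Notice that nothing in C’s own grammar marks the parallel part: the two halves of the dot product are ordinary serial loops, and the parallelism arrives as a plain library call that the compiler treats like any other function.

/* A sketch of C's parallelism problem: the language has no syntax
 * for "do these concurrently." The threading is smuggled in through
 * an ordinary library call, invisible to the compiler. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static double a[N], b[N], sum_lo, sum_hi;

/* Each half of the dot product is just a serial loop; the "parallel"
 * part is entirely in how we choose to launch it. */
static void *dot_low(void *arg)  { (void)arg; for (int i = 0;   i < N/2; i++) sum_lo += a[i]*b[i]; return NULL; }
static void *dot_high(void *arg) { (void)arg; for (int i = N/2; i < N;   i++) sum_hi += a[i]*b[i]; return NULL; }

int main(void)
{
    for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    pthread_t t;
    pthread_create(&t, NULL, dot_low, NULL); /* half on another core... */
    dot_high(NULL);                          /* ...half on this one     */
    pthread_join(t, NULL);

    printf("dot = %f\n", sum_lo + sum_hi);
    return 0;
}

The compiler never learns that the two loops can run concurrently; that knowledge lives entirely in the programmer’s head, which is precisely the complaint.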

The conference’s hardware panel looked at, well, hardware design. How many cores are enough and how many are too many? The answer, as is so often the case, depends on what you’re doing and whom you’re trying to impress. Two identical cores will work fine for some “embarrassingly parallel” tasks, while other chips sport a dozen processor cores, all different. The mix-and-match approach (in tech speak: heterogeneous) is actually the easier of the two to program. When all your cores are different, it’s pretty easy to decide which code should be running on which core. It’s when you have a pool of four, eight, or 16 identical cores that things get tricky. How do you partition the workload and who (or what) decides?
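For the homogeneous case, here’s a sketch of what “partitioning the workload” means at the code level, assuming a pool of four identical cores (the thread count, the even split, and names like square_slice are mine, purely for illustration):

/* The hardware doesn't care how work is split across identical
 * cores, so somebody has to pick a partition. Here: a naive static
 * split of one array across NUM_WORKERS threads. */
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4
#define N 1000000

static int data[N];

struct slice { int lo, hi; };          /* half-open range [lo, hi) */

static void *square_slice(void *arg)
{
    struct slice *s = arg;
    for (int i = s->lo; i < s->hi; i++)
        data[i] *= data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++) data[i] = i % 10;

    pthread_t tid[NUM_WORKERS];
    struct slice part[NUM_WORKERS];

    /* The partitioning decision lives here, in application code --
     * neither the OS nor the silicon makes it for us. */
    for (int w = 0; w < NUM_WORKERS; w++) {
        part[w].lo = w * (N / NUM_WORKERS);
        part[w].hi = (w == NUM_WORKERS - 1) ? N : (w + 1) * (N / NUM_WORKERS);
        pthread_create(&tid[w], NULL, square_slice, &part[w]);
    }
    for (int w = 0; w < NUM_WORKERS; w++)
        pthread_join(tid[w], NULL);

    printf("data[42] = %d\n", data[42]);
    return 0;
}

A static, even split like this works only when every slice costs about the same. Uneven work demands dynamic scheduling, which is exactly where the “who decides” question starts to bite.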

Both audience members and speakers pointed the finger at operating system vendors – who pointed right back. Many programmers felt it was the task of the OS to carve up the software workload among the various hardware resources. The OS vendors pushed back, saying (with some justification) that’s impossible. The OS has no magic knowledge of how tasks should be run, or where the parallelism lies. Conflicts, interlocks, and dependencies are simply not divinable through any automated means. The programmer has to know what he’s doing.
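They have a point. In the hypothetical sketch below, two threads bump a shared counter a million times each. Strip out the mutex and the program still compiles, still runs, and still prints a number, just the wrong one. Nothing observable from outside distinguishes the correct version from the broken one; only the programmer knows the increments conflict.

/* Why the OS can't divine dependencies: these two threads race on
 * `counter`, and no scheduler can tell the locked version from the
 * unlocked one. The programmer must state the dependency, here with
 * a mutex. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);   /* remove the locking and this   */
        counter++;                   /* still runs -- it just loses   */
        pthread_mutex_unlock(&lock); /* increments silently           */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expect 2000000)\n", counter);
    return 0;
}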

Whose Job Is It, Anyway?

The problem is, most programmers don’t know what they’re doing. At least, not when it comes to multicore. Throughout the conference, one message crept through: nobody wants to admit that we’ve got to learn our trade all over again. We’re all looking for a magic bullet that isn’t coming. What we want is a superhero; what we’ve got is ourselves.

Software people say they never asked for multicore; that it was foisted on them by the hardware guys. Hardware people say they never asked for multicore, either. Everyone wanted faster chips at lower power, and multicore was the only way to get there. Be careful what you wish for; you might get it. Now our wish has been granted. Processors are faster, cheaper, and more power-efficient than ever before, but we have little idea what to do with them.

Fortunately, there’s no sin in wasting processor power. If we don’t use all the performance of a multicore processor, that’s okay. But to move ahead – to stay on the Moore’s Law treadmill – we’ve got to learn to harness this unruly beast. And it’s not going to be easy. There’s no magic compiler, operating system, or EDA tool on the horizon that will effortlessly lift this burden from our shoulders. It’s going to mean relearning hardware design and relearning programming.

“May you live in interesting times” was a Chinese curse, not a blessing. Here in the embedded-design industry, we are blessed to be living in an accursed interesting time.
