
Whither Embedded

Part Deux

In this second installment of our big embedded-executive brain dump, we delve a bit more into the pros and cons of multicore processors and multiprocessor systems, but also take a look at programming languages and customer trends. Here’s what’s happening. 

All four participants agreed that the first multicore project is the hardest. “Moving from one core to two cores is the biggest step,” said CriticalBlue’s CEO David Stewart. “After that, four, eight, even thirty-two cores is a smaller step.”
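
To make that first step concrete, here’s a minimal sketch in C (ours, not any panelist’s) of what going from one thread to two actually entails. The moment a serial accumulation is split across two POSIX threads, shared state appears and has to be locked, a class of bug that single-core code never faces.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    #define N 1000000L

    static long long total = 0;   /* shared state: the new hazard */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each thread sums half the range into a private partial result,
     * then folds it into the shared total under a mutex. Omit the
     * lock and the answer is silently wrong some of the time. */
    static void *worker(void *arg)
    {
        long start = (long)(intptr_t)arg;
        long long partial = 0;

        for (long i = start; i < start + N / 2; i++)
            partial += i;

        pthread_mutex_lock(&lock);
        total += partial;
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, (void *)(intptr_t)0);
        pthread_create(&t2, NULL, worker, (void *)(intptr_t)(N / 2));
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("sum 0..%ld = %lld\n", N - 1, total);  /* expect 499999500000 */
        return 0;
    }

From two threads to thirty-two the pattern is the same, which is Stewart’s point: the hard part is admitting shared state and locking into the design at all.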

Ramana Jampala of CebaTech echoed a similar sentiment. Design engineers “have to prepare; it’s a planning process. Programmers have to plan the migration of their software.” Educating – or reeducating – programmers won’t happen overnight, but it’s mandatory for effective multicore programming. Or even to keep one’s head above water.

One member felt that “each industry segment will get there at its own rate. Networking is already there. Handsets are almost there. Other markets, like automotive and industrial, will take longer.”  

Because of the training and inertia involved, programmers may sidestep multicore programming and instead throw hardware at the problem. Ramana gave an example: “If my customer says to me, ‘Make it go faster, make it more efficient,’ would I risk my sequential software and existing hardware tools for parallelization, or would I try to push the boulder uphill by trying to make this software multithreaded?” In his view, it’s easier to take advantage of hardware accelerators and other specialized offloads than to try to integrate and program more processors.
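
The offload approach Ramana describes can be sketched as a thin dispatch layer: the application keeps its sequential structure and routes a hot function to an accelerator when one is present, falling back to the original software path otherwise. The C sketch below is hypothetical; accel_available() and accel_checksum() are invented stand-ins for a real offload driver’s API.

    #include <stddef.h>
    #include <stdio.h>

    static int accel_available(void)
    {
        /* A real system might probe a device node here; this
         * hypothetical stub simply reports "not present". */
        return 0;
    }

    static unsigned sw_checksum(const unsigned char *buf, size_t len)
    {
        unsigned sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }

    static unsigned accel_checksum(const unsigned char *buf, size_t len)
    {
        /* Placeholder for a call into the offload hardware. */
        return sw_checksum(buf, len);
    }

    /* The rest of the program, still sequential, calls this one
     * entry point and never needs to be rethreaded. */
    unsigned checksum(const unsigned char *buf, size_t len)
    {
        return accel_available() ? accel_checksum(buf, len)
                                 : sw_checksum(buf, len);
    }

    int main(void)
    {
        unsigned char data[] = "hello, offload";
        printf("checksum = %u\n", checksum(data, sizeof data - 1));
        return 0;
    }

The appeal is that only the dispatch layer knows the accelerator exists; everything else stays sequential and can still be tested against the pure-software path.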

He gave another example of a networking customer struggling with a high-end Intel “Nehalem” (Core i7) processor. “He could barely get 500–600 Mbps. But teamed with hardware offload in the form of an FPGA, we were able to identify parallelization using automated tools and produce hardware at one-third to one-fourth the cost of the Intel CPU, and 5x better performance. I’m not saying multicore or multithreaded software couldn’t have done the job,” he added, “but the amount of effort involved wouldn’t have been worth it.”

Complex as they are, multicore processors are in ever-rising demand. ARM’s Ian Ferguson mentioned one customer (also in the networking business) who is “putting down eight Cortex-A9’s in a base station, in two clusters of four.”

Customers want it, he insists, and ARM is accommodating their wishes. “We made some basic assumptions about how SMP [symmetric multiprocessing] would be instantiated… we’ve never really been in network infrastructure.” In designing the Cortex-A9, ARM assumed that a four-way cluster would be sufficient and designed the core with this in mind. Now that at least one customer is clustering double that number, the company may need to rethink that limitation.
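
To software, such a part typically appears under Linux as eight logical CPUs, and the scheduler will migrate threads across clusters unless told otherwise. Here’s a hedged sketch, assuming (platform-specifically) that CPUs 4–7 map to the second cluster, of pinning a thread there so its working set stays within one cluster’s shared cache:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    /* Pin the calling thread to CPUs 4-7. The CPU numbering is an
     * assumption; real cluster topology should be read from sysfs. */
    static int pin_to_second_cluster(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        for (int cpu = 4; cpu <= 7; cpu++)
            CPU_SET(cpu, &set);
        return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    int main(void)
    {
        int rc = pin_to_second_cluster();
        if (rc != 0)
            fprintf(stderr, "affinity failed: %d\n", rc);
        else
            printf("pinned to CPUs 4-7\n");
        return 0;
    }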

Are we as humans multicore capable?

There’s a school of thought that the problem with multicore programming isn’t the chips or the compilers – it’s us. As humans, we’re simply not built to grok multicore programming. Our thought processes are inherently serial, not parallel (so the reasoning goes), so we’re congenitally ill-suited to the task. If that’s so, is there anything we can do about it?

Perhaps a new programming language is required. Or even a whole new programming paradigm that uses symbols (for example) or flowcharts or schematics. There’s been no lack of effort in this area, all the way from academic research teams to unemployed college students with time on their hands. Why has (almost) no one adopted these new languages?

David Stewart replied, “That’s not how people work. We’ve always been through evolutions, and probably always will.” The rest of the group agreed. There’s never a good point when everyone can change. Working engineers have to get tomorrow’s product out the door, so there’s never a convenient two-year gap to relearn everything. We instead make incremental improvements, precluding any wholesale changes in our programming habits or mindsets. As academically attractive as a new programming paradigm might be, it’s impractical. Like any language, a new one is useful only if people actually adopt it.

It’s ten years later; do you know where your customers are?

Toward the end of our discussion we talked about changes in the customer base. Are embedded-systems suppliers selling to the same customers they were ten years ago? And do they expect to be selling to the same customers ten years from now?

Atmel’s Jay Johnson predictably replied with “yes and no.” In his case, companies that laid off ASIC designers are now gravitating toward FPGAs and other programmable products like Atmel’s. “Kids out of [engineering] school are taught that FPGAs are the way to customize.”

He also noted a shift in the customer base over the last decade. Most of Atmel’s programmable chips used to go into automotive and industrial applications. Now it’s mostly consumer electronics. The switch happened about three years ago and seems permanent. Short consumer design cycles and an urgent need to differentiate products have led designers to rely on programmable logic for an edge.

CebaTech’s Ramana said his customer demographics haven’t changed over time. “We appeal mainly to software companies without a lot of hardware-design expertise, but who need a lot of acceleration.”

ARM is still doing pretty well in mobile applications, said the company’s Ian Ferguson. But he also noted that power consumption is more important to more people than before, and not just in battery-powered products. That’s partly what led ARM to break out its processor product line into three separate Cortex-A, -R, and -M families.

And in the end…

Although there was a lot of agreement among our group of experts, they often agreed to disagree. There appears to be no one solution to the “multicore problem,” and it’s been interesting to watch customers feel their own way through the darkness. Some embrace it wholeheartedly; others are in denial. Many take the pragmatic course of modifying as little of their hardware and software as possible to eke out an incremental improvement in performance. Everyone looks to everyone else for the “right” approach, or waits for the industry at large to deliver a breakthrough that’ll make all this complexity go away.

As much as we might like the fairy tale of the white knight riding to our rescue, that’s not likely to happen. Engineers will do what engineers have always done: slog through the muddy waters of design decisions, picking the path that’s best (or least miserable) for their particular problem. 

Even if there were an ideal engineering solution – even if we could somehow prove that it was the best, most power-efficient, cheapest, fastest way – we’d still have different engineers following completely different approaches. That’s just the perversity of engineering. And if history is any indication, the worst one will be the most successful.

Which is cool; that’s what makes engineering fun.
