
Whither Embedded

Part Deux

In this second installment of our big embedded-executive brain dump, we delve a bit more into the pros and cons of multicore processors and multiprocessor systems, but also take a look at programming languages and customer trends. Here’s what’s happening. 

All four participants agreed that the first multicore project is the hardest. “Moving from one core to two cores is the biggest step,” said CriticalBlue’s CEO David Stewart. “After that, four, eight, even thirty-two cores is a smaller step.”

Ramana Jampala of CebaTech echoed that sentiment. Design engineers “have to prepare; it’s a planning process. Programmers have to plan the migration of their software.” Educating – or reeducating – programmers won’t happen overnight, but it’s mandatory for effective multicore programming. Or even to keep one’s head above water.
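To make that planning concrete, here’s a minimal sketch of the kind of restructuring a migration entails – a sequential reduction carved into independent slices, one per thread. It’s an illustration of the general idea, not code from any of the participants; the array size, thread count, and use of POSIX threads are all assumptions for the example.

#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 2   /* illustrative; a real port would size this to the core count */

static double data[N];

struct slice { int start, end; double partial; };

/* Each thread sums only its own slice; no shared mutable state, no locks. */
static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    s->partial = 0.0;
    for (int i = s->start; i < s->end; i++)
        s->partial += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    pthread_t    tid[NTHREADS];
    struct slice s[NTHREADS];
    const int    chunk = N / NTHREADS;

    /* The "planning" part: decide up front how the data is partitioned. */
    for (int t = 0; t < NTHREADS; t++) {
        s[t].start = t * chunk;
        s[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, sum_slice, &s[t]);
    }

    /* Combine partial results only after joining: the serial "reduce" step. */
    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += s[t].partial;
    }

    printf("sum = %f\n", total);   /* expect 1000000.0 */
    return 0;
}

Even this toy forces the decisions Jampala is describing: how the data is partitioned, what each thread is allowed to touch, and where the partial results get recombined. Scale that up to a real embedded codebase and the “planning process” stops being optional.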

One member felt that “each industry segment will get there at its own rate. Networking is already there. Handsets are almost there. Other markets, like automotive and industrial, will take longer.”  

Because of the training and inertia involved, programmers may sidestep multicore programming and instead throw hardware at the problem. Ramana gave an example: “If my customer says to me, ‘Make it go faster, make it more efficient,’ would I risk my sequential software and existing hardware tools for parallelization, or would I try to push the boulder uphill by trying to make this software multithreaded?” In his view, it’s easier to take advantage of hardware accelerators and other specialized offloads than to try to integrate and program more processors.

He gave another example of a networking customer struggling with a high-end Intel “Nehalem” (Core i7) processor. “He could barely get 500–600 Mbps. But teamed with hardware offload in the form of an FPGA, we were able to identify parallelization using automated tools and produce hardware at one-third to one-fourth the cost of the Intel CPU, and 5x better performance. I’m not saying multicore or multithreaded software couldn’t have done the job,” he added, “but the amount of effort involved wouldn’t have been worth it.”

Complex as they are, multicore processors are in ever-greater demand. ARM’s Ian Ferguson mentioned one customer (also in the networking business) who is “putting down eight Cortex-A9s in a base station, in two clusters of four.”

Customers want multicore, he insists, and ARM is accommodating their wishes. “We made some basic assumptions about how SMP [symmetric multiprocessing] would be instantiated… we’ve never really been in network infrastructure.” In designing the Cortex-A9, ARM assumed that a four-way cluster would be sufficient and built the core accordingly. Now that at least one customer is clustering double that number, the company may need to rethink that limit.

Are we as humans multicore-capable?

There’s a school of thought that the problem with multicore programming isn’t the chips or the compilers – it’s us. As humans, we’re simply not built to grok multicore programming. Our thought processes are inherently serial, not parallel (so the reasoning goes), so we’re congenitally ill-suited to the task. If that’s so, is there anything we can do about it?

Perhaps a new programming language is required. Or even a whole new programming paradigm that uses symbols (for example) or flowcharts or schematics. There’s been no lack of effort in this area, all the way from academic research teams to unemployed college students with time on their hands. Why has (almost) no one adopted these new languages?

David Stewart replied, “That’s not how people work. We’ve always been through evolutions, and probably always will.” The rest of the group agreed. There’s never a good point when everyone can change. Working engineers have to get tomorrow’s product out the door, so there’s never a convenient two-year gap in which to relearn everything. Instead, we make incremental improvements, which precludes any wholesale change in our programming habits or mindsets. As academically attractive as a new programming paradigm might be, it’s impractical: like any language, it’s useful only if people actually adopt it.

It’s ten years later; do you know where your customers are?

Toward the end of our discussion we talked about changes in the customer base. Are embedded-systems suppliers selling to the same customers they were ten years ago? And do they expect to be selling to the same customers ten years from now?

Atmel’s Jay Johnson predictably replied, “Yes and no.” In his case, companies that laid off ASIC designers are now gravitating toward FPGAs and other programmable products like Atmel’s. “Kids out of [engineering] school are taught that FPGAs are the way to customize.”

He also noted a shift in the customer base over the last decade. Most of Atmel’s programmable chips used to go into automotive and industrial applications. Now it’s mostly consumer electronics. The switch happened about three years ago and seems permanent. Short consumer design cycles and an urgent need to differentiate products have led designers to rely on programmable logic for an edge.

CebaTech’s Ramana said his customer demographics haven’t changed over time. “We appeal mainly to software companies without a lot of hardware-design expertise, but who need a lot of acceleration.”

ARM is still doing pretty well in mobile applications, said the company’s Ian Ferguson. But he also noted that power consumption matters to more people than it used to, and not just in battery-powered products. That’s partly what led ARM to break out its processor product line into the three separate Cortex-A, -R, and -M families.

And in the end…

Although there was a lot of agreement among our group of experts, they often agreed to disagree. There appears to be no single solution to the “multicore problem,” and it’s been interesting to watch customers feel their own way through the darkness. Some embrace it wholeheartedly; others are in denial. Many take the pragmatic course, modifying as little of their hardware and software as possible to eke out an incremental improvement in performance. Everyone looks to everyone else for the “right” approach, or waits for the industry at large to deliver a breakthrough that’ll make all this complexity go away.

As much as we might like the fairy tale of the white knight riding to our rescue, that’s not likely to happen. Engineers will do what engineers have always done: slog through the muddy waters of design decisions, picking the path that’s best (or least miserable) for their particular problem. 

Even if there were an ideal engineering solution – even if we could somehow prove that it was the best, most power-efficient, cheapest, fastest way – we’d still have different engineers following completely different approaches. That’s just the perversity of engineering. And if history is any indication, the worst one will be the most successful.

Which is cool; that’s what makes engineering fun.
