
Whither Embedded

Part Deux

In this second installment of our big embedded-executive brain dump, we delve a bit more into the pros and cons of multicore processors and multiprocessor systems, but also take a look at programming languages and customer trends. Here’s what’s happening. 

All four participants agreed that the first multicore project is the hardest. “Moving from one core to two cores is the biggest step,” said CriticalBlue’s CEO David Stewart. “After that, four, eight, even thirty-two cores is a smaller step.”

Ramana Jampala of CebaTech echoed the sentiment. Design engineers “have to prepare; it’s a planning process. Programmers have to plan the migration of their software.” Educating – or reeducating – programmers won’t happen overnight, but it’s mandatory for effective multicore programming. Or even to keep one’s head above water.

One member felt that “each industry segment will get there at its own rate. Networking is already there. Handsets are almost there. Other markets, like automotive and industrial, will take longer.”  

Because of the training and inertia involved, programmers may sidestep multicore programming and instead throw hardware at the problem. Ramana gave an example: “If my customer says to me, ‘Make it go faster, make it more efficient,’ would I risk my sequential software and existing hardware tools for parallelization, or would I try to push the boulder uphill by trying to make this software multithreaded?” In his view, it’s easier to take advantage of hardware accelerators and other specialized offloads than to try to integrate and program more processors.
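To make that tradeoff concrete, here’s a minimal sketch – not drawn from CebaTech or any of the panelists’ products – of what “making this software multithreaded” typically involves. A trivial sequential checksum is split across POSIX threads; the fixed thread count and function names are purely illustrative. Even this toy case forces you to partition the data, manage fork/join, and add a reduction step, the kind of restructuring and re-testing that real sequential code would have to survive.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_THREADS 4   /* illustrative: one worker per core */

struct chunk {
    const uint8_t *data;    /* this worker's slice of the input buffer */
    size_t         len;
    uint64_t       partial; /* per-thread result, combined at the end */
};

/* The sequential version: easy to write, easy to reason about. */
static uint64_t checksum_serial(const uint8_t *data, size_t len)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += data[i];
    return sum;
}

/* Worker: each thread sums only its own slice, so no locking is needed. */
static void *checksum_worker(void *arg)
{
    struct chunk *c = arg;
    c->partial = checksum_serial(c->data, c->len);
    return NULL;
}

/* The parallel version: partition the buffer, fork, join, reduce. */
static uint64_t checksum_parallel(const uint8_t *data, size_t len)
{
    pthread_t    tid[NUM_THREADS];
    struct chunk work[NUM_THREADS];
    size_t       slice = len / NUM_THREADS;
    uint64_t     sum = 0;

    for (int i = 0; i < NUM_THREADS; i++) {
        work[i].data = data + (size_t)i * slice;
        work[i].len  = (i == NUM_THREADS - 1) ? len - (size_t)i * slice : slice;
        pthread_create(&tid[i], NULL, checksum_worker, &work[i]);
    }
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_join(tid[i], NULL);
        sum += work[i].partial;
    }
    return sum;
}

int main(void)
{
    static uint8_t buf[64 * 1024];
    for (size_t i = 0; i < sizeof(buf); i++)
        buf[i] = (uint8_t)i;

    printf("serial:   %llu\n", (unsigned long long)checksum_serial(buf, sizeof(buf)));
    printf("parallel: %llu\n", (unsigned long long)checksum_parallel(buf, sizeof(buf)));
    return 0;
}

(Compile with gcc -pthread. In real embedded code the data dependencies are rarely this clean, which is exactly why the offload route can look more attractive.)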

He gave another example of a networking customer struggling with a high-end Intel “Nehalem” (Core i7) processor. “He could barely get 500–600 Mbps. But teamed with hardware offload in the form of an FPGA, we were able to identify parallelization using automated tools and produce hardware at one-third to one-fourth the cost of the Intel CPU, and 5x better performance. I’m not saying multicore or multithreaded software couldn’t have done the job,” he added, “but the amount of effort involved wouldn’t have been worth it.”

As complex as multicore processors are, demand for them keeps rising. ARM’s Ian Ferguson mentioned one customer (also in the networking business) who is “putting down eight Cortex-A9’s in a base station, in two clusters of four.”

Customers want it, he insists, and ARM is accommodating their wishes. “We made some basic assumptions about how SMP [symmetric multiprocessing] would be instantiated… we’ve never really been in network infrastructure.” In designing the Cortex-A9, ARM assumed that a four-way cluster would be sufficient and designed the core with this in mind. Now that at least one customer is clustering double that number, the company may need to rethink that limitation.

Are we as humans multicore capable?

There’s a school of thought that the problem with multicore programming isn’t the chips or the compilers – it’s us. As humans, we’re simply not built to grok multicore programming. Our thought processes are inherently serial, not parallel (so the reasoning goes), so we’re congenitally ill-suited to the task. If that’s so, is there anything we can do about it?

Perhaps a new programming language is required. Or even a whole new programming paradigm that uses symbols (for example) or flowcharts or schematics. There’s been no lack of effort in this area, all the way from academic research teams to unemployed college students with time on their hands. Why has (almost) no one adopted these new languages?

David Stewart replied, “That’s not how people work. We’ve always been through evolutions, and probably always will.” The rest of the group agreed. There’s never a good point when everyone can change. Working engineers have to get tomorrow’s product out the door, so there’s never a convenient two-year gap to relearn everything. We instead make incremental improvements, precluding any wholesale changes in our programming habits or mindsets. As academically attractive as a new programming paradigm might be, it’s impractical. Like any language, a new one is useful only if people actually adopt it.

It’s ten years later; do you know where your customers are?

Toward the end of our discussion we talked about changes in the customer base. Are embedded-systems suppliers selling to the same customers they were ten years ago? And do they expect to be selling to the same customers ten years from now?

Atmel’s Jay Johnson predictably replied, “yes and no.” In his case, companies that laid off ASIC designers are now gravitating toward FPGAs and other programmable products like Atmel’s. “Kids out of [engineering] school are taught that FPGAs are the way to customize.”

He also noted a shift in the customer base over the last decade. Most of Atmel’s programmable chips used to go into automotive and industrial applications. Now it’s mostly consumer electronics. The switch happened about three years ago and seems permanent. Short consumer design cycles and an urgent need to differentiate products have led designers to rely on programmable logic for an edge.

CebaTech’s Ramana said his customer demographics haven’t changed over time. “We appeal mainly to software companies without a lot of hardware-design expertise, but who need a lot of acceleration.”

ARM is still doing pretty well in mobile applications, said the company’s Ian Ferguson. But he also noted that power consumption is more important to more people than before, and not just in battery-powered products. That’s partly what led ARM to break out its processor product line into three separate Cortex-A, -R, and -M families.

And in the end…

Although there was a lot of agreement among our group of experts, they often agreed to disagree. There appears to be no one solution to the “multicore problem,” and it’s been interesting to watch customers feel their way through the darkness. Some embrace it wholeheartedly; others are in denial. Many take the pragmatic course of modifying as little of their hardware and software as possible to eke out an incremental improvement in performance. Everyone looks to everyone else for the “right” approach, or waits for the industry at large to deliver a breakthrough that’ll make all this complexity go away.

As much as we might like the fairy tale of the white knight riding to our rescue, that’s not likely to happen. Engineers will do what engineers have always done: slog through the muddy waters of design decisions, picking the path that’s best (or least miserable) for their particular problem. 

Even if there were an ideal engineering solution – even if we could somehow prove that it was the best, most power-efficient, cheapest, fastest way – we’d still have different engineers following completely different approaches. That’s just the perversity of engineering. And if history is any indication, the worst one will be the most successful.

Which is cool; that’s what makes engineering fun.
