
Freescale Goes Multi-Core

Comprehensive Roadmap Shows Strategy

The math is simple. When the power required to double the speed of one processor far exceeds the power required to simply add a second one, it's time to think about multi-core. This idea has been around for at least thirty years. We all knew it would eventually happen. Supercomputing was probably the first to fall: the giant monolithic supercomputer was rendered obsolete by massively parallel processing arrays years ago. In the desktop computing world, we have made the jump from single-core to multi-core for high-end machines within the last two years. For the embedded systems world, it appears that the time is now.
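
To make that math concrete: to first order, a core's dynamic power scales with capacitance, the square of supply voltage, and clock frequency, and pushing frequency up generally requires pushing voltage up with it. A back-of-envelope illustration (our simplification, assuming voltage must scale linearly with frequency):

```latex
% Dynamic CMOS power, to first order:  P \propto C V^2 f
% If doubling f also requires doubling V:
P_{2f} \propto C\,(2V)^2\,(2f) = 8\,C V^2 f
% versus two cores at the original voltage and frequency:
P_{2\,\mathrm{cores}} \propto 2\,C V^2 f
```

Roughly eight times the power for one fast core versus twice the power for two slower ones. The exact exponents vary by process, but that gap is the whole argument.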

Freescale announced an expansive and comprehensive multi-core strategy this week.  The strategy includes a description of the company’s new multi-core microarchitecture, a simulation/virtual platform environment to support software development, and a process announcement that the new technology will be implemented in 45nm silicon. 

Multi-core poses significant challenges on both the hardware and the software side. The fundamental hardware challenge is to make one plus one equal something near two. If effective processing power doesn't scale reasonably well with the number of cores, the simple math that justified multi-core in the first place starts to break down.
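
The textbook yardstick for that breakdown is Amdahl's law (our reference point here, not something from Freescale's announcement): if only a fraction p of a workload can be spread across N cores, the speedup is capped no matter how many cores you add.

```latex
% Amdahl's law: speedup on N cores when fraction p of the work parallelizes.
S(N) = \frac{1}{(1 - p) + p/N}
% Example: p = 0.9, N = 2  \Rightarrow  S = 1/(0.1 + 0.45) \approx 1.8
```

Even with 90% of the work parallelized, one plus one equals about 1.8, and the shortfall grows with the core count.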

There are a number of hardware bottlenecks that can interfere with the smooth scaling of performance in multi-core systems. The first of these is the interconnect between processing elements (and between processing elements and the rest of the world). Traditional bus-based systems start to clog up when more than one processor is feeding from the trough. To address this problem, Freescale has replaced the bus with an on-chip scalable switching fabric called "CoreNet." This fabric eliminates bus-contention issues and can support the much higher bandwidth required to keep potentially more than 32 cores operating smoothly.
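
A first-order way to see the bus problem (our simplification, ignoring arbitration overhead and traffic locality): a shared bus has a fixed aggregate bandwidth that gets divided among the cores, while a switched fabric can carry multiple transfers concurrently.

```latex
% Shared bus: the average per-core share shrinks as cores are added.
B_{\mathrm{core}} = B_{\mathrm{bus}} / N
\qquad N = 32 \;\Rightarrow\; B_{\mathrm{core}} \approx 3\%\ \text{of}\ B_{\mathrm{bus}}
% A switched fabric adds concurrent paths, so aggregate bandwidth can
% grow with the number of ports instead of being divided among them.
```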

When more than one processor is fighting for resources, caching also becomes a tricky proposition. Freescale's new multi-core microarchitecture sports a tri-level cache hierarchy with back-side L2 caches, multiple shared L3 caches, and multiple memory controllers. There is also a dedicated acceleration architecture that supports on-demand application acceleration, allowing higher-performance, lower-power hardware implementations of a variety of specialized algorithms for tasks like pattern matching, compression/decompression, cryptographic security, table lookups, and datapath resource management. The multi-core microarchitecture is independent of any specific choice of processor architecture and can handle a combination of homogeneous and heterogeneous processor cores and specialized hardware accelerators. The platform will use Freescale's Power Architecture cores.
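
A classic illustration of why shared caching gets tricky is "false sharing." The sketch below is generic pthreads C, not anything specific to Freescale's microarchitecture, and it assumes 64-byte cache lines: two threads increment logically independent counters, but if the counters sit in the same cache line, the coherence protocol bounces that line between cores on every write. Padding each counter onto its own line removes the contention.

```c
/* Generic false-sharing demo (not Freescale-specific); assumes 64-byte
 * cache lines. Compile with: cc -O2 -pthread false_sharing.c */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 100000000L

/* Each counter is padded to occupy a full cache line. Remove the pad
 * field and the two threads fight over a single line, typically slowing
 * this loop down by several times. */
struct padded_counter {
    volatile long value;
    char pad[64 - sizeof(long)];
};

static struct padded_counter counters[2];

static void *worker(void *arg)
{
    int idx = *(int *)arg;
    for (long i = 0; i < ITERATIONS; i++)
        counters[idx].value++;   /* each write stays on its own line */
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int ids[2] = { 0, 1 };

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);

    printf("%ld %ld\n", counters[0].value, counters[1].value);
    return 0;
}
```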

A third major challenge in extracting the potential of multi-core processing is making software work well with multi-core semantics. For decades, we've trained programmers to think sequentially, breaking complex processes down into sets of ordered steps. Now, with multiple processors available, that traditional thinking is counter-productive. The current software development model is far behind the curve in taking advantage of the inherent parallelism available in multi-core environments. From the training of software engineers to the structure of programming languages to the design of compilers and operating systems, sequential assumptions are deeply embedded. In addition, billions of lines of legacy sequential software are already tested and waiting to be accelerated by modern multi-core technology.
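
As a minimal sketch of the restructuring involved (generic pthreads C under our own assumptions, not any vendor's toolchain): a sequential accumulation is split into independent chunks with per-thread partial sums, so no thread touches shared state until the final combine.

```c
/* Sequential sum rewritten for multiple cores: each thread reduces its
 * own chunk; partial results are combined after the join. */
#include <pthread.h>
#include <stdio.h>

#define N        (1 << 20)
#define NTHREADS 4                /* assumes NTHREADS divides N */

static double data[N];

struct chunk { int start, end; double partial; };

static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    double s = 0.0;
    for (int i = c->start; i < c->end; i++)
        s += data[i];
    c->partial = s;               /* private result: no locking needed */
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    struct chunk c[NTHREADS];

    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    for (int i = 0; i < NTHREADS; i++) {
        c[i].start = i * (N / NTHREADS);
        c[i].end   = (i + 1) * (N / NTHREADS);
        pthread_create(&t[i], NULL, sum_chunk, &c[i]);
    }

    double total = 0.0;
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(t[i], NULL);
        total += c[i].partial;    /* sequential combine of partial sums */
    }
    printf("sum = %.0f (expected %d)\n", total, N);
    return 0;
}
```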

Over time, we need to evolve the programming model to account for multi-core processing in a more reasonable way.  We also need to develop improved technology for running legacy applications efficiently on multi-core processors, handling issues such as load balancing and ambiguity in the availability of processing elements.  While this overhaul will obviously take years or decades to complete, there are a number of good first steps already behind us.
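
One of the standard techniques for the load-balancing part of that problem is to stop carving work up statically and instead let threads pull tasks from a shared queue, so whichever cores happen to be free soak up the remaining work. A minimal sketch, again in generic pthreads C rather than any platform-specific API:

```c
/* Dynamic load balancing via a shared task queue: threads grab the next
 * task index under a lock, so faster cores naturally run more tasks. */
#include <pthread.h>
#include <stdio.h>

#define NTASKS   1000
#define NTHREADS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_task = 0;
static int done_by[NTHREADS];

static void do_task(int id) { (void)id; /* stand-in for real work */ }

static void *worker(void *arg)
{
    int me = *(int *)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int task = (next_task < NTASKS) ? next_task++ : -1;
        pthread_mutex_unlock(&lock);
        if (task < 0)
            break;                /* queue drained: this thread is done */
        do_task(task);
        done_by[me]++;
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    int ids[NTHREADS];

    for (int i = 0; i < NTHREADS; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(t[i], NULL);
        printf("thread %d ran %d tasks\n", i, done_by[i]);
    }
    return 0;
}
```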

One of those first steps is the multi-core development environment Freescale is announcing in conjunction with the rollout. Recognizing that embedded development teams will need to start getting their software ready well in advance of hardware delivery, the company has worked with Virtutech (whose Simics virtual platform we've covered in previous articles) to create a virtual platform for the new offering. Virtualization allows software development to proceed independently of hardware and provides a level of debug and analysis visibility not possible in a pure hardware environment. The Virtutech system will allow a mixture of functional models and cycle-accurate models on the hardware side, letting you trade off between simulation performance and accuracy and facilitating the assessment of multi-core performance on particular software.

Multi-core devices from Freescale will not be available until 2008, owing in part to the decision to base the platform on 45nm silicon-on-insulator (SOI) technology. The company expects significant dynamic power reduction (estimated at 50%), a performance increase, and cost reduction (from a 50% die-size reduction) compared with 90nm implementations. Freescale expects the new platform to net a 4X performance increase over its previous offerings. The 45nm technology comes out of an ongoing alliance with IBM, and a roadmap to 32nm and 22nm is already in place.
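
The cost side of that claim is simple geometry (our back-of-envelope, ignoring yield and per-wafer cost differences between the two processes): halving die area roughly doubles the number of gross dies per wafer.

```latex
% Gross dies per wafer, to first order:
\text{dies/wafer} \approx A_{\mathrm{wafer}} / A_{\mathrm{die}}
% Halving the die:
A_{\mathrm{die}} \to \tfrac{1}{2} A_{\mathrm{die}}
\;\Rightarrow\; \text{dies/wafer} \times 2
\;\Rightarrow\; \text{cost/die} \approx \times \tfrac{1}{2}
```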

With Freescale’s announcement, a plethora of partners are joining the game – Wind River, MontaVista, and Green Hills have all announced software support and endorsement of the new platform, Virtutech has announced the Simics collaboration with Freescale, and a host of other vendors are likely to chime in with their own announcements as the new platform comes to market. 

The hybrid simulation environment that supports the new multi-core platform will be available starting in Q4 2007, and the first devices are expected to hit the market in late 2008.  The MPC8572 and corresponding simulation model (which “closely mirrors” the first multi-core platform implementations) are available today.

While availability of most of the actual products announced today is frustratingly far away, the scope of the announcement and the program show that Freescale is committed to multi-core as the future direction of the embedded industry.  With consumers demanding devices with unprecedented levels of performance, communication, and integration, it is likely that even run-of-the-mill embedded applications will require multi-core performance in the near future.  Given the impact of the advent of multi-core on the engineering community that must deploy these devices, starting with an architecture announcement now is a prudent first step.
