
Freescale Goes Multi-Core

Comprehensive Roadmap Shows Strategy

The math is simple.  When the amount of power required to double the speed of one processor far exceeds the amount of power required for two processors, it’s time to be thinking about multi-core.  This idea has been around for at least thirty years.  We all knew it would eventually happen.  Supercomputing was probably the first to fall.  The giant monolithic supercomputer was rendered obsolete by massively parallel processing arrays years ago.  In the desktop computing world, we have made the jump from single core to multi-core for high-end machines within the last two years.  For the embedded systems world, it appears that the time is now.

Freescale announced a comprehensive multi-core strategy this week. The strategy includes a description of the company's new multi-core microarchitecture, a simulation/virtual-platform environment to support software development, and the announcement that the new technology will be implemented in 45nm silicon.

Multi-core poses significant challenges on both the hardware and the software side.  The fundamental challenge in hardware is to try to make one plus one equal something near two.  If your effective available processing power doesn’t scale reasonably well with the increasing number of processor cores, our simple math equation (the one that justified multi-core in the first place) starts to break down. 
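
One common way to put numbers on that "one plus one" problem (not something Freescale's announcement spells out, but a standard rule of thumb) is Amdahl's law: any fraction of the workload that stays serial caps the speedup extra cores can deliver. The minimal sketch below just evaluates the formula for a few assumed serial fractions and core counts.

```c
/*
 * A minimal sketch of Amdahl's law, illustrating why "one plus one"
 * rarely equals two: any serial fraction of the workload caps the
 * speedup available from additional cores. The serial fractions and
 * core counts below are illustrative assumptions, not Freescale data.
 */
#include <stdio.h>

/* Ideal speedup on n cores when a fraction 'serial' of the work cannot be parallelized. */
static double amdahl_speedup(double serial, int n)
{
    return 1.0 / (serial + (1.0 - serial) / n);
}

int main(void)
{
    const double serial_fractions[] = { 0.05, 0.10, 0.25 };
    const int    core_counts[]      = { 2, 4, 8, 32 };

    for (size_t i = 0; i < sizeof serial_fractions / sizeof serial_fractions[0]; i++) {
        for (size_t j = 0; j < sizeof core_counts / sizeof core_counts[0]; j++) {
            printf("serial = %4.0f%%, cores = %2d  ->  speedup = %.2fx\n",
                   serial_fractions[i] * 100.0,
                   core_counts[j],
                   amdahl_speedup(serial_fractions[i], core_counts[j]));
        }
    }
    return 0;
}
```

Even with only 10% of the work stuck in serial code, 32 cores deliver roughly a 7.8X speedup rather than 32X, which is exactly why the interconnect, caching, and software issues below matter so much.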

There are a number of hardware bottlenecks that can interfere with the smooth scaling of performance in multi-core systems. The first of these is the interconnect between processing elements (and between processing elements and the rest of the world). Traditional bus-based systems start to clog up when more than one processor is feeding from the trough. To address this problem, Freescale has switched to an on-chip scalable switching fabric called "CoreNet." This fabric eliminates bus-contention issues and can support the much higher bandwidth required to keep potentially more than 32 cores operating smoothly.

When you get more than one processor fighting for resources, caching is also a tricky proposition. Freescale's new multi-core microarchitecture sports a tri-level cache hierarchy with back-side L2 caches, multiple shared L3 caches, and multiple memory controllers. There is also a dedicated acceleration architecture that supports on-demand application acceleration, allowing higher-performance, lower-power hardware implementations of a variety of specialized algorithms for tasks like pattern matching, compression/decompression, cryptographic security, table lookups, and datapath resource management. The multi-core microarchitecture is independent of any specific choice of processor architecture and can handle a combination of homogeneous and heterogeneous processor cores and specialized hardware accelerators. The platform will use Freescale's Power Architecture cores.
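
The announcement doesn't go into software-visible cache effects, but a small, generic example shows why caching gets tricky once multiple cores share a hierarchy: two threads updating adjacent counters can ping-pong a single cache line between cores (false sharing) even though they never touch each other's data. The sketch below uses POSIX threads and an assumed 64-byte cache line; it is illustrative only and not specific to Freescale's hierarchy.

```c
/*
 * A generic illustration of false sharing, one reason caching is a
 * "tricky proposition" with multiple cores. The two threads never touch
 * each other's counter, yet without padding both counters would share one
 * cache line, so the line bounces between cores on every write.
 * The 64-byte line size is an assumption, not a Freescale specification.
 */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 100000000L
#define CACHE_LINE 64

struct counter {
    volatile long value;
    char pad[CACHE_LINE - sizeof(long)];  /* remove this padding to provoke false sharing */
};

/* Align the array so each padded counter sits in its own cache line. */
static _Alignas(CACHE_LINE) struct counter counters[2];

static void *worker(void *arg)
{
    struct counter *c = arg;
    for (long i = 0; i < ITERATIONS; i++)
        c->value++;
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;

    pthread_create(&t0, NULL, worker, &counters[0]);
    pthread_create(&t1, NULL, worker, &counters[1]);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);

    printf("counters: %ld %ld\n", counters[0].value, counters[1].value);
    return 0;
}
```

Timing the padded and unpadded variants on most multi-core machines typically shows a large gap; that coherence traffic is the kind of load a cache hierarchy and interconnect have to absorb.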

The third major challenge in extracting the potential of multi-core processing is making software compatible with multi-core semantics. For decades, we've trained programmers to think sequentially, breaking complex processes down into sets of ordered steps. Now, with a plurality of processors available, that traditional thinking is counterproductive. The current software development model is far behind the curve in taking advantage of the inherent parallelism available in multi-core environments. From the training of software engineers to the structure of programming languages to the design of compilers and operating systems, sequential assumptions are deeply embedded. In addition, billions of lines of legacy sequential software are already tested and waiting to be accelerated by modern multi-core technology.
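
To make the "sequential assumptions" point concrete, here is a deliberately simple example (not drawn from Freescale's material): a loop that sums an array, written the way programmers have been trained to write it, followed by a parallel version that splits the range across POSIX threads. Even this trivial rewrite forces decisions about partitioning and combining results that the sequential version never had to make. The thread count and array size are arbitrary.

```c
/*
 * A deliberately simple illustration of the gap between sequential habits
 * and multi-core semantics: summing an array. The point is the extra
 * structure (partitioning, per-thread partial results, a final combine)
 * that even trivial parallelism demands.
 */
#include <pthread.h>
#include <stdio.h>

#define N        (1 << 24)
#define NTHREADS 4

static int data[N];

struct slice {
    int  *begin;
    long  count;
    long  partial_sum;
};

/* The sequential version: one ordered pass, the way we were all trained to think. */
static long sum_sequential(const int *a, long n)
{
    long s = 0;
    for (long i = 0; i < n; i++)
        s += a[i];
    return s;
}

static void *sum_slice(void *arg)
{
    struct slice *sl = arg;
    sl->partial_sum = sum_sequential(sl->begin, sl->count);
    return NULL;
}

int main(void)
{
    for (long i = 0; i < N; i++)
        data[i] = (int)(i & 0xff);

    /* Parallel version: partition the range, sum each slice, then combine. */
    pthread_t    threads[NTHREADS];
    struct slice slices[NTHREADS];
    long         chunk = N / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        slices[t].begin = data + t * chunk;
        slices[t].count = (t == NTHREADS - 1) ? N - t * chunk : chunk;
        pthread_create(&threads[t], NULL, sum_slice, &slices[t]);
    }

    long total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(threads[t], NULL);
        total += slices[t].partial_sum;
    }

    printf("sequential: %ld  parallel: %ld\n", sum_sequential(data, N), total);
    return 0;
}
```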

Over time, we need to evolve the programming model to account for multi-core processing in a more reasonable way.  We also need to develop improved technology for running legacy applications efficiently on multi-core processors, handling issues such as load balancing and ambiguity in the availability of processing elements.  While this overhaul will obviously take years or decades to complete, there are a number of good first steps already behind us.
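
Load balancing is one of those issues where the direction of the fix is easy to sketch, even if the production technology is not. Rather than statically assigning a fixed slice of work to each core (which stalls the whole job on the slowest slice), a shared queue lets idle cores claim the next chunk on demand. The snippet below is a minimal, generic sketch of that idea using an atomically incremented work index; it is not tied to any Freescale runtime, and the chunk size, thread count, and "work" are illustrative assumptions.

```c
/*
 * A minimal sketch of dynamic load balancing: worker threads pull chunks
 * of work from a shared counter instead of being handed fixed slices, so
 * uneven chunks no longer leave cores idle.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define TOTAL_ITEMS 1000000
#define CHUNK       1024
#define NTHREADS    4

static atomic_long next_item = 0;          /* shared work index */
static atomic_long items_done[NTHREADS];   /* per-thread bookkeeping */

static void process(long item)
{
    /* Placeholder for real, possibly uneven, per-item work. */
    (void)item;
}

static void *worker(void *arg)
{
    long id = (long)(intptr_t)arg;

    for (;;) {
        long start = atomic_fetch_add(&next_item, CHUNK);
        if (start >= TOTAL_ITEMS)
            break;
        long end = (start + CHUNK < TOTAL_ITEMS) ? start + CHUNK : TOTAL_ITEMS;
        for (long i = start; i < end; i++)
            process(i);
        atomic_fetch_add(&items_done[id], end - start);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];

    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&threads[t], NULL, worker, (void *)(intptr_t)t);

    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(threads[t], NULL);
        printf("thread %d processed %ld items\n", t, atomic_load(&items_done[t]));
    }
    return 0;
}
```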

One of those first steps is the multi-core development environment Freescale is announcing in conjunction with this rollout. Recognizing that embedded development teams will need to start getting their software ready far in advance of hardware delivery, Freescale has worked with Virtutech (whose Simics virtual platform we've covered in previous articles) to create a virtual platform for the new offering. Virtualization allows software development to proceed independent of hardware and provides a level of debug and analysis visibility not possible in a pure hardware environment. The Virtutech system will allow a mixture of functional models and cycle-accurate models on the hardware side, letting you trade off simulation performance against accuracy and facilitating the assessment of multi-core performance on particular software.
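
Virtutech's actual modeling interfaces aren't described in the announcement, so the sketch below is only a toy illustration of the underlying trade-off: a purely functional memory-read model returns correct data and charges a flat cost, while a timing-aware model of the same read also consults a simple cache state to charge a more realistic latency. Swapping one for the other trades simulation speed for accuracy, which is the knob a mixed functional/cycle-accurate platform gives you. All latencies and structures here are invented.

```c
/*
 * A toy illustration (not the Simics API) of the trade-off between
 * functional and timing-aware models in a virtual platform. Both models
 * return the same data, so software runs correctly either way; only the
 * timing model pays for a tag lookup and miss penalties, making it more
 * accurate and slower to simulate. All latencies are invented numbers.
 */
#include <stdint.h>
#include <stdio.h>

#define MEM_WORDS 4096
#define TAGS      64

static uint32_t memory[MEM_WORDS];
static uint32_t cache_tags[TAGS];
static uint64_t simulated_cycles;

/* Functional model: correct data, flat one-cycle cost, very fast to simulate. */
static uint32_t read_functional(uint32_t addr)
{
    simulated_cycles += 1;
    return memory[addr % MEM_WORDS];
}

/* Timing-aware model: same data, but models a tiny direct-mapped cache. */
static uint32_t read_timed(uint32_t addr)
{
    uint32_t index = (addr / 8) % TAGS;
    uint32_t tag   = addr / (8 * TAGS);

    if (cache_tags[index] == tag) {
        simulated_cycles += 3;      /* assumed hit latency */
    } else {
        cache_tags[index] = tag;
        simulated_cycles += 40;     /* assumed miss penalty */
    }
    return memory[addr % MEM_WORDS];
}

int main(void)
{
    uint32_t sum = 0;

    simulated_cycles = 0;
    for (uint32_t a = 0; a < 100000; a++)
        sum += read_functional(a);
    printf("functional model: %llu cycles\n", (unsigned long long)simulated_cycles);

    simulated_cycles = 0;
    for (uint32_t a = 0; a < 100000; a++)
        sum += read_timed(a);
    printf("timing model:     %llu cycles (sum %u)\n",
           (unsigned long long)simulated_cycles, sum);
    return 0;
}
```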

Multi-core devices from Freescale will not be available until 2008, owing in part to their decision to base the platform on 45nm silicon-on-insulator (SOI) technology. The company expects to see a significant dynamic power reduction (estimated at 50%), a performance increase, and a cost reduction (from a 50% die-size reduction) compared to 90nm implementations. Freescale expects the new platform to net a 4X performance increase over their previous offerings. The 45nm technology is based on an ongoing collaboration with the IBM alliance and has a roadmap to 32nm and 22nm in place.
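
The dynamic-power claim is easy to sanity-check with the standard CMOS switching-power relation. Freescale doesn't publish the underlying scaling factors, so the numbers below are illustrative assumptions chosen only to show how modest capacitance and supply-voltage reductions compound into roughly a 50% saving at the same clock frequency.

```latex
% Standard dynamic-power relation; the 45nm scaling factors below are
% illustrative assumptions, not Freescale's figures.
P_{\mathrm{dyn}} = \alpha \, C \, V_{DD}^{2} \, f

% Example: ~30% lower switched capacitance and ~15% lower supply voltage
% at the same activity factor and clock frequency:
\frac{P_{45\,\mathrm{nm}}}{P_{90\,\mathrm{nm}}}
  = \frac{C'\,V'^{2}}{C\,V^{2}}
  = 0.7 \times 0.85^{2} \approx 0.51
```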

With Freescale's announcement, a plethora of partners are joining the game: Wind River, MontaVista, and Green Hills have all announced software support and endorsement of the new platform; Virtutech has announced the Simics collaboration with Freescale; and a host of other vendors are likely to chime in with their own announcements as the new platform comes to market.

The hybrid simulation environment that supports the new multi-core platform will be available starting in Q4 2007, and the first devices are expected to hit the market in late 2008.  The MPC8572 and corresponding simulation model (which “closely mirrors” the first multi-core platform implementations) are available today.

While availability of most of the actual products announced today is frustratingly far away, the scope of the announcement and the program show that Freescale is committed to multi-core as the future direction of the embedded industry. With consumers demanding devices with unprecedented levels of performance, communication, and integration, it is likely that even run-of-the-mill embedded applications will require multi-core performance in the near future. Given the impact multi-core will have on the engineering community that must deploy these devices, starting with an architecture announcement now is a prudent first step.
