feature article

Freescale Goes Multi-Core

Comprehensive Roadmap Shows Strategy

The math is simple.  When the amount of power required to double the speed of one processor far exceeds the amount of power required for two processors, it’s time to be thinking about multi-core.  This idea has been around for at least thirty years.  We all knew it would eventually happen.  Supercomputing was probably the first to fall.  The giant monolithic supercomputer was rendered obsolete by massively parallel processing arrays years ago.  In the desktop computing world, we have made the jump from single core to multi-core for high-end machines within the last two years.  For the embedded systems world, it appears that the time is now.
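
To put rough numbers on that trade-off, the familiar CMOS dynamic power relation (P roughly proportional to C·V²·f) tells the story: doubling one core's clock usually means raising its supply voltage as well, so power climbs far faster than performance, while a second core at the original clock costs roughly 2X the power for roughly 2X the peak throughput.  The sketch below is purely illustrative, with made-up capacitance and voltage figures rather than data for any particular device.

#include <stdio.h>

/* Rough CMOS dynamic power model: P = C * V^2 * f.
 * All numbers below are illustrative assumptions, not vendor data. */
static double dyn_power(double cap, double volts, double freq_hz)
{
    return cap * volts * volts * freq_hz;
}

int main(void)
{
    const double cap    = 1.0e-9;   /* effective switched capacitance (F), assumed */
    const double base_v = 1.0;      /* nominal supply voltage (V), assumed */
    const double base_f = 1.0e9;    /* nominal clock (Hz), assumed */

    double one_core  = dyn_power(cap, base_v, base_f);
    /* Doubling frequency on one core typically needs a voltage bump too
     * (say ~1.3x here), so power grows far faster than performance. */
    double fast_core = dyn_power(cap, base_v * 1.3, base_f * 2.0);
    /* Two cores at the original voltage and clock: ~2x power, ~2x peak throughput. */
    double two_cores = 2.0 * dyn_power(cap, base_v, base_f);

    printf("one core  @ 1x clock: %.2f W\n", one_core);
    printf("one core  @ 2x clock: %.2f W\n", fast_core);
    printf("two cores @ 1x clock: %.2f W\n", two_cores);
    return 0;
}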

Freescale announced a comprehensive multi-core strategy this week.  The strategy includes a description of the company’s new multi-core microarchitecture, a simulation/virtual platform environment to support software development, and a process announcement that the new technology will be implemented in 45nm silicon.

Multi-core poses significant challenges on both the hardware and the software sides.  The fundamental challenge in hardware is to make one plus one equal something near two.  If your effective available processing power doesn’t scale reasonably well with the increasing number of processor cores, the simple math that justified multi-core in the first place starts to break down.
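
One common way to quantify how close one plus one gets to two is Amdahl’s law: if a fraction s of a workload is inherently serial, the best speedup n cores can deliver is 1 / (s + (1 - s)/n).  The short sketch below runs that formula over a few assumed serial fractions; the numbers are illustrative only, not measurements of any Freescale part.

#include <stdio.h>

/* Amdahl's law: upper bound on speedup for `cores` processors when a
 * fraction `serial` of the work cannot be parallelized. */
static double amdahl(double serial, int cores)
{
    return 1.0 / (serial + (1.0 - serial) / cores);
}

int main(void)
{
    const double serial_fractions[] = { 0.05, 0.10, 0.25 };  /* assumed workloads */
    const int    core_counts[]      = { 2, 4, 8, 32 };

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 4; j++) {
            printf("serial %2.0f%%, %2d cores -> %5.2fx speedup\n",
                   serial_fractions[i] * 100.0, core_counts[j],
                   amdahl(serial_fractions[i], core_counts[j]));
        }
    }
    return 0;
}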

There are a number of hardware bottlenecks that can interfere with the smooth scaling of performance in multi-core systems.  The first of these is the interconnect between processing elements (and between processing elements and the rest of the world).  Traditional bus-based systems start to clog up when more than one processor is feeding from the trough.  To address this problem, Freescale has switched to an on-chip scalable switching fabric called “CoreNet.”  This fabric eliminates bus contention issues and can supply the much higher bandwidth required to keep potentially more than 32 cores operating smoothly.

When you get more than one processor fighting for resources, caching is also a tricky proposition.  Freescale’s new multi-core microarchitecture sports a tri-level cache hierarchy with back-side L2 caches, multiple shared L3 caches, and multiple memory controllers.  There is also a dedicated architecture for on-demand application acceleration, allowing higher-performance, lower-power hardware implementation of a variety of specialized algorithms for tasks like pattern matching, compression/decompression, cryptographic security, table lookups, and datapath resource management.  The multi-core microarchitecture is independent of any specific choice of processor architecture and can handle a combination of homogeneous and heterogeneous processor cores and specialized hardware accelerators.  The platform will use Freescale’s Power Architecture cores.
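
As a generic illustration of why caching gets tricky once multiple cores share data (ordinary C and POSIX threads here, nothing specific to Freescale’s cache design), consider false sharing: two logically independent counters that happen to sit in the same cache line force that line to ping-pong between cores on every write, while padding the counters onto separate lines avoids the extra coherence traffic.

#include <pthread.h>
#include <stdio.h>

/* False-sharing illustration (generic, not tied to any particular cache
 * hierarchy): two independent counters packed into one cache line make
 * each core's write invalidate the other core's copy of that line. */

#define ITERATIONS 10000000L

struct packed_counters {      /* both fields share one 64-byte line */
    volatile long a;
    volatile long b;
} packed;

struct padded_counters {      /* padding pushes each field onto its own line */
    volatile long a;
    char pad[64];
    volatile long b;
} padded;

static void *bump(void *arg)
{
    volatile long *counter = arg;
    for (long i = 0; i < ITERATIONS; i++)
        (*counter)++;
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    /* Falsely-shared case: packed.a and packed.b live in the same line. */
    pthread_create(&t1, NULL, bump, (void *)&packed.a);
    pthread_create(&t2, NULL, bump, (void *)&packed.b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Passing &padded.a and &padded.b instead keeps each counter on its
     * own line and eliminates the coherence ping-pong. */
    printf("a=%ld b=%ld\n", packed.a, packed.b);
    return 0;
}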

A third major problem with extracting the potential of multi-core processing is making software compatible with multi-core semantics.  For decades, we’ve trained programmers to think sequentially, breaking complex processes down into sets of ordered steps.  Now, with a plurality of processors available, that traditional thinking is counter-productive.  The current software development model is far behind the curve on taking advantage of the inherent parallelism available in multi-core environments.  From the training of software engineers to the structure of programming languages to the design of compilers and operating systems, sequential assumptions are deeply embedded.  In addition, billions of lines of legacy sequential software are already tested and waiting to be accelerated by modern multi-core technology.
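
As a toy example of the shift in thinking (ordinary C with POSIX threads, not any vendor-specific API), here is the classic sequential reduction rewritten so that each core sums its own slice of an array and the partial results are combined at the end.

#include <pthread.h>
#include <stdio.h>

/* Toy example: a programmer trained to think sequentially writes the first
 * loop; exploiting multiple cores means decomposing it into independent
 * slices, one per thread, then combining the partial results. */

#define N        1000000
#define NTHREADS 4

static double data[N];

struct slice { int start, end; double partial; };

static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    s->partial = 0.0;
    for (int i = s->start; i < s->end; i++)
        s->partial += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++) data[i] = 1.0;

    /* Sequential version: one core does all the work. */
    double seq = 0.0;
    for (int i = 0; i < N; i++) seq += data[i];

    /* Parallel version: split the index range across NTHREADS workers. */
    pthread_t tid[NTHREADS];
    struct slice slices[NTHREADS];
    for (int t = 0; t < NTHREADS; t++) {
        slices[t].start = t * (N / NTHREADS);
        slices[t].end   = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, sum_slice, &slices[t]);
    }
    double par = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        par += slices[t].partial;
    }
    printf("sequential %.0f, parallel %.0f\n", seq, par);
    return 0;
}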

Over time, we need to evolve the programming model to account for multi-core processing in a more reasonable way.  We also need to develop improved technology for running legacy applications efficiently on multi-core processors, handling issues such as load balancing and ambiguity in the availability of processing elements.  While this overhaul will obviously take years or decades to complete, there are a number of good first steps already behind us.
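
The load-balancing issue is easy to sketch with a generic work-queue pattern (hypothetical example code, not a Freescale mechanism): rather than statically carving the work up per core, each worker thread pulls the next chunk from a shared counter, so cores that finish early, or that are less loaded by other tasks, simply take on more of the remaining work.

#include <pthread.h>
#include <stdio.h>

/* Generic dynamic load-balancing sketch: workers pull chunk indices from a
 * shared counter, so a core that finishes early grabs more work instead of
 * sitting idle behind a static assignment. */

#define NCHUNKS  64
#define NWORKERS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_chunk = 0;
static int chunks_done[NWORKERS];
static volatile double sink;

static void process_chunk(int chunk)
{
    /* Simulated uneven workload: later chunks take longer. */
    double x = 0.0;
    for (long i = 0; i < (long)(chunk + 1) * 100000L; i++)
        x += i * 0.5;
    sink = x;
}

static void *worker(void *arg)
{
    int id = *(int *)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int chunk = (next_chunk < NCHUNKS) ? next_chunk++ : -1;
        pthread_mutex_unlock(&lock);
        if (chunk < 0) break;
        process_chunk(chunk);
        chunks_done[id]++;
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];
    int ids[NWORKERS];
    for (int i = 0; i < NWORKERS; i++) {
        ids[i] = i;
        pthread_create(&tid[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < NWORKERS; i++) {
        pthread_join(tid[i], NULL);
        printf("worker %d processed %d chunks\n", i, chunks_done[i]);
    }
    return 0;
}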

One of those first steps is the multi-core development environment Freescale is announcing in conjunction with the rollout.  Recognizing that embedded development teams will need to start getting their software ready far in advance of hardware delivery, Freescale has worked with Virtutech (whose Simics virtual platform we’ve covered in previous articles) to create a virtual platform for the new offering.  Virtualization allows software development to proceed independent of hardware and provides a level of debug and analysis visibility not possible in a pure hardware environment.  The Virtutech system will allow a mixture of functional models and cycle-accurate models on the hardware side, letting you trade off between simulation performance and accuracy, and facilitating the assessment of multi-core performance on particular software.

Multi-core devices from Freescale will not be available until 2008, owing in part to their decision to base the platform on 45nm silicon-on-insulator (SOI) technology.  The company expects to see significant dynamic power reduction (estimated at 50%), a performance increase, and a cost reduction (a 50% die-size reduction) compared to 90nm implementations.  Freescale expects the new platform to net a 4X performance increase over their previous offerings.  The 45nm technology comes out of an ongoing collaboration with the IBM technology alliance, and a roadmap to 32nm and 22nm is already in place.

With Freescale’s announcement, a plethora of partners are joining the game – Wind River, MontaVista, and Green Hills have all announced software support and endorsement of the new platform, Virtutech has announced the Simics collaboration with Freescale, and a host of other vendors are likely to chime in with their own announcements as the new platform comes to market. 

The hybrid simulation environment that supports the new multi-core platform will be available starting in Q4 2007, and the first devices are expected to hit the market in late 2008.  The MPC8572 and corresponding simulation model (which “closely mirrors” the first multi-core platform implementations) are available today.

While availability of most of the actual products announced today is frustratingly far away, the scope of the announcement and the program show that Freescale is committed to multi-core as the future direction of the embedded industry.  With consumers demanding devices with unprecedented levels of performance, communication, and integration, it is likely that even run-of-the-mill embedded applications will require multi-core performance in the near future.  Given the impact of the advent of multi-core on the engineering community that must deploy these devices, starting with an architecture announcement now is a prudent first step.
