
NXP’s Multicore Goes Micro

Most of us think of multicore processors as a big deal. It’s a high-end architectural trick, designed to get maximum performance from PC and workstation processors. Multicore is big league. Multicore is complicated. Multicore is expensive.

NXP has turned that idea on its head with a line of wee microcontrollers that start at just $2. The company’s LPC4300 family is certainly multicore—it’s even heterogeneous, if you’re into that—but it’s multicore writ small. Mighty mites in the service of mankind, you might say.

There are four members of this new LPC4300 family, and, seemingly like every other new processor these days, they’re ARM-based. All four pair an ARM Cortex-M4 with a Cortex-M0. Both CPUs run at the same speed, but one is dedicated to running your code while the other devotes its full attention to the on-chip I/O. Yes, we now have low-cost microcontrollers with their own onboard I/O coprocessor.

That’s not surprising, given the amount of I/O NXP put on these chips. That little M0 is going to be busy. For starters, there’s the standard suite of UARTs, timers, RTC, interrupt controllers, JTAG, quad SPI, CAN, I2C, PWM, quadrature encoder input, DAC and ADC, AES decryption—the list goes on and on. Even in their 144- or 256-pin packages, these chips are pin-limited: there are more peripheral functions than there are pins to bring them all out.

It’s the job of the Cortex-M0 to babysit all the on-chip peripheral devices so that your M4 doesn’t have to. That’s one benefit. The other is that the M0 has a bit of horsepower left over to massage the I/O data before it reaches the main processor, so it can essentially “flavor enhance” the peripherals with some intelligence they wouldn’t otherwise have. In extreme cases, you can even bit-bang the I/O pins directly, creating your own software-defined peripherals.
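To make that concrete, here’s a minimal sketch of what one such software-defined peripheral might look like: a bit-banged UART transmitter of the kind the M0 could run while the M4 gets on with the application. The register addresses, pin assignment, and delay constant are illustrative placeholders rather than actual LPC4300 values; real code would use the device header and a timer-calibrated bit period.

```c
/* Sketch of a bit-banged "software UART" transmitter.  The GPIO register
 * addresses and pin assignment are hypothetical placeholders, not real
 * LPC4300 addresses -- consult the device header for the actual ones. */
#include <stdint.h>

#define GPIO_SET  (*(volatile uint32_t *)0x40000000u)  /* hypothetical pin-set register   */
#define GPIO_CLR  (*(volatile uint32_t *)0x40000004u)  /* hypothetical pin-clear register */
#define TX_PIN    (1u << 3)                            /* hypothetical TX pin, idle high  */

static void delay_one_bit(void)
{
    /* Busy-wait for one bit period; the count depends on clock speed and baud rate. */
    for (volatile uint32_t i = 0; i < 1000u; i++) { }
}

void soft_uart_putc(uint8_t c)
{
    GPIO_CLR = TX_PIN;                      /* start bit: drive the line low */
    delay_one_bit();

    for (int bit = 0; bit < 8; bit++) {     /* 8 data bits, LSB first */
        if (c & (1u << bit)) GPIO_SET = TX_PIN;
        else                 GPIO_CLR = TX_PIN;
        delay_one_bit();
    }

    GPIO_SET = TX_PIN;                      /* stop bit: release the line high */
    delay_one_bit();
}
```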

This isn’t as silly as it sounds. Ubicom makes a line of nice little communications controllers that operate on exactly that principle. Ubicom’s current chips are an extension of its earlier Scenix SX microcontrollers, which used a tiny multithreaded CPU to emulate hardware peripherals in software. From the outside, they looked like standard peripheral devices. On the inside, there was just a microcontroller whirling away frantically, toggling I/O lines under real-time software control. Pretty clever.

NXP allows you to do something similar, adding your own filtering, driver intelligence, watch points, or whatever you’d like. Since it’s a Cortex-M0 and not some mysterious proprietary peripheral processor, it’s pretty easy to program, too. NXP makes no secret of the M0’s identity and even encourages customers to tweak the presupplied driver code if they like.

This is a lot different from, say, Freescale’s approach with the Time Processing Unit (TPU) found in many of its communications controllers. The TPU is essentially a black box that only Freescale can program. It gives the company a lot of design freedom, but we only get to push the buttons Freescale provides.

All four members of the LPC4300 family start out with the same collection of baseline I/O. Then the options start to pile up. The basic 4310 has all the features mentioned above; the 4320 adds one USB port; the 4330 adds Ethernet and a second USB port; and finally, the 4350 adds a color LCD controller. Along the way, on-chip SRAM and flash capacity increase and the package necessarily gets larger. Prices soar to maybe $6 or $7 in quantity.

All four chips can run as fast as 150 MHz, which ain’t bad for a dual-core chip priced in single digits. The main Cortex-M4 CPU has a floating-point unit and even some rudimentary DSP features. Remember when FPUs were a $300 option for your PC? Now they’re practically giving them away.
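For a rough idea of what the on-chip FPU buys you, consider the inner loop of most filtering and control math: a single-precision multiply-accumulate. This is a generic sketch, not NXP library code; the point is that, built for the M4 with hardware floating point enabled, it compiles to FPU instructions instead of software-emulation calls.

```c
#include <stdint.h>

/* Single-precision dot product -- the core of FIR filters, control loops,
 * and similar DSP-style work.  Compiled for the Cortex-M4 with the FPU
 * enabled (e.g. GCC's -mcpu=cortex-m4 -mfpu=fpv4-sp-d16 -mfloat-abi=hard),
 * the multiply-accumulate maps to hardware float instructions rather than
 * slow library routines. */
float dot_product_f32(const float *a, const float *b, uint32_t n)
{
    float acc = 0.0f;
    for (uint32_t i = 0; i < n; i++) {
        acc += a[i] * b[i];
    }
    return acc;
}
```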

The LCD controller on the high-end 4350 device is pretty decent. It can handle resolutions up to 1024×768 and can do monochrome, grayscale, or 24-bit color. The Ethernet interface (found on the 4330 and ’50) handles 10/100 Mbps data rates over MII or RMII (MAC only, so you supply your own PHY). All four versions even have a PWM output and quadrature encoder (QEI) input for accurate motor-control applications. All in all, not bad for cheap microcontrollers.

Because the M4 main processor and the M0 peripheral processor are binary compatible, you could move your I/O drivers from the M0 to the M4 and back again if you like. You might find, for example, that it’s easier to port old code to the main M4 processor at first, putting off the task of delegating the I/O drivers. Or, you might devise a fiendishly clever system for managing the I/O that runs on the M0 and you want to push the drivers back onto the M4. Whatever method suits you, NXP’s chips are happy to accommodate.
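One way to keep drivers that portable is to route their data through a small shared-memory mailbox, since both cores see the same address space. The sketch below is illustrative only (the structure layout and function names are hypothetical, not NXP’s supplied inter-processor code), but the same source compiles for either core, so the producer and consumer roles can be swapped at build time.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical shared-memory mailbox.  Because the M4 and the M0 share one
 * address space, either core can own either end of this ring buffer. */
typedef struct {
    volatile uint32_t head;       /* written only by the producing core */
    volatile uint32_t tail;       /* written only by the consuming core */
    volatile uint8_t  data[256];  /* power-of-two ring buffer           */
} mailbox_t;

/* Placed at an address both cores agree on, typically via the linker script. */
extern mailbox_t g_uart_rx_mbox;

/* Producer side: runs on whichever core currently owns the I/O driver. */
bool mailbox_put(mailbox_t *m, uint8_t byte)
{
    uint32_t next = (m->head + 1u) & 0xFFu;
    if (next == m->tail) {
        return false;             /* buffer full */
    }
    m->data[m->head] = byte;
    m->head = next;
    return true;
}

/* Consumer side: runs on the other core. */
bool mailbox_get(mailbox_t *m, uint8_t *byte)
{
    if (m->tail == m->head) {
        return false;             /* buffer empty */
    }
    *byte = m->data[m->tail];
    m->tail = (m->tail + 1u) & 0xFFu;
    return true;
}
```

A production version would add memory barriers and an interrupt to wake the consuming core, but the division of labor stays the same whichever processor ends up running the driver.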

NXP’s engineers say they considered adding a hardwired I/O assist to the LPC4300 family, but they decided they liked the idea of a dedicated Cortex-M0 instead. They even considered using programmable logic to add I/O flexibility, as Actel (now Microsemi) and others have done. Again, the doctrine of separate-but-equal processors won out. NXP feels that its customers are more familiar with programming processors than with configuring FPGAs. It’s better (in their view) to use one set of tools for both processors than to have a second set of tools for I/O configuration and manipulation. I tend to agree; as long as the M0 is fast enough and cheap enough, it’s a simpler alternative to programmable logic or custom I/O engines.

So now we can all get into multicore programming at a budget price. At just $2 a pop, there’s little reason not to. 
