
NASA Recruits Microchip, SiFive, and RISC-V to Develop 12-Core Processor SoC for Autonomous Space Missions

NASA’s Jet Propulsion Laboratory (JPL) has selected Microchip to design and manufacture the multi-core High Performance Spaceflight Computer (HPSC) microprocessor SoC. The design is based on eight SiFive X280 RISC-V cores with vector-processing instruction extensions, organized into two clusters, plus four additional RISC-V cores for general-purpose computing. The project’s operational goal is to develop “flight computing technology that will provide at least 100 times the computational capacity compared to current spaceflight computers.” During a talk at the recent RISC-V Summit, Pete Fiacco, a member of the HPSC Leadership Team and a JPL consultant, explained the overall HPSC program goals.

Despite the name, the HPSC is not strictly a processor SoC for space. It’s designed to be a reliable computer for a variety of terrestrial applications – such as defense, commercial aviation, industrial robotics, and medical equipment – as well as a good candidate for government and commercial spacecraft. Beyond raw computing capability, the HPSC needs three characteristics: fault tolerance, radiation tolerance, and overall platform security. The project will deliver the HPSC chip, boards, a software stack, and reference designs, with initial availability in 2024 and space-qualified hardware in 2025. Fiacco said that everything NASA JPL does in the future will be based on the HPSC.

NASA JPL set the goals for the HPSC based on its mission requirements to put autonomy into future spacecraft. Simply put, the tasks associated with autonomy are sensing, perceiving, deciding, and actuating. Sensing involves remote imaging using multi-spectral sensors and image processing. Perception instills meaning into the sensed data using additional image processing. Decision making includes mission planning that incorporates the vehicle’s current and future orientation. Actuation involves orbital and surface maneuvering and experiment activation and management.
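The sense–perceive–decide–actuate cycle described above can be sketched as a simple control loop. This is purely an illustrative toy, not NASA flight software; every name and threshold below is a made-up placeholder:

```python
# Hypothetical sketch of the sense -> perceive -> decide -> actuate autonomy
# cycle. All names and values are illustrative; real flight software adds
# fault handling, redundancy, and hard real-time scheduling.

def sense(sensors):
    """Collect raw readings from each sensor (e.g., multi-spectral imagers)."""
    return {name: read() for name, read in sensors.items()}

def perceive(raw):
    """Instill meaning into sensed data, e.g., flag a terrain hazard."""
    return {"hazard": raw["camera"] > 0.8}  # 0.8 is an arbitrary threshold

def decide(world, plan):
    """Pick the next action from the mission plan, given the perceived world."""
    return "halt" if world["hazard"] else plan.pop(0)

def actuate(action):
    """Command the vehicle (here, just return the chosen action)."""
    return action

def autonomy_step(sensors, plan):
    """One pass through the full sense-perceive-decide-actuate cycle."""
    return actuate(decide(perceive(sense(sensors)), plan))
```

For example, `autonomy_step({"camera": lambda: 0.2}, ["drive_forward"])` proceeds with the plan, while a camera reading above the hazard threshold makes the loop return `"halt"` instead.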

Correlating these tasks with NASA’s overall objectives for its missions, Fiacco explained that the HPSC is designed to allow space-bound equipment to go, land, live, and explore extraterrestrial environments. Spacecraft also need to report back to Earth, which is why Fiacco included communications in all four major tasks. All of this will require a huge leap in computing power. Simulations suggest that the HPSC delivers 1000X the computing performance of the processors currently flying in space, and Fiacco expects that number to improve with further optimization of the HPSC’s software stack.

It’s hard to convey how big an upgrade the HPSC represents for NASA JPL’s computing platform without contrasting the new machine with computers currently operating off planet. For example, the essentially similar, nuclear-powered Curiosity and Perseverance rovers now trundling around Mars with semi-autonomy are based on RAD750 microprocessors from BAE Systems. (See “Baby You Can Drive My Rover.”) The RAD750 implements the 32-bit PowerPC 750 architecture and is manufactured with a radiation-tolerant semiconductor process. The chip has a maximum clock rate of 200 MHz and represents the best of computer architecture circa 2001. Reportedly, more than 150 RAD750 processors have been launched into space. Remember, NASA likes to fly hardware that’s flown before. One of the latest spacecraft to carry a RAD750 is the James Webb Space Telescope (JWST), which is now imaging the universe in the infrared spectrum and collecting massive amounts of new astronomical data from a Lagrange-point orbit one million miles from Earth. (That’s roughly four times the distance to the Moon.) The JWST’s RAD750 processor lopes along at 118 MHz.

Our other great space observatory, the solar-powered Hubble Space Telescope (HST), sports an even older processor. The HST payload computer is an 18-bit NASA Standard Spacecraft Computer-1 (NSSC-1) system built in the 1980s but designed even earlier. This payload computer controls and coordinates data streams from the HST’s various scientific instruments and monitors their condition. (See “Losing Hubble – Saving Hubble.”)

The original NSSC-1 computer was developed by the NASA Goddard Space Flight Center and Westinghouse Electric in the early 1970s. The design is so old that it’s not based on a microprocessor. The initial version of this computer incorporated 1700 DTL flat-pack ICs from Fairchild Semiconductor and used magnetic core memory. Long before the HST launched in 1990, the NSSC-1 processor design was “upgraded” to fit into some very early MSI TTL gate arrays, each incorporating approximately 130 gates of logic.

I’m not an expert in space-based computing, so I asked an expert for his opinion. The person I know who is most versed in space-based computing with microprocessors and FPGAs is my friend Adam Taylor, the founder and president of Adiuvo Engineering in the UK. I asked Taylor what he thought of the HPSC and he wrote:

“The HPSC is actually quite exciting for me. We do a lot in space and computation is a challenge. Many of the current computing platforms are based on older architectures like the SPARC (LEON series) or Power PC (RAD750 / RAD5545). Not only do these [processors] have less computing power, they also have ecosystems which are limited. Limited ecosystems mean longer development times (less reuse, more “fighting” with the tools as they are generally less polished) and they also limit attraction of new talent, people who want to work with modern frameworks, processors, and tools. This also limits the pool of experienced talent (which is an increasing issue like it is in many industries).

“The creation of a high-performance multicore processor based around RISC-V will open up a wide ecosystem of tools and frameworks while also providing attraction to new talent and widening the pool of experienced talent. The processors themselves look very interesting as they are designed with high performance in mind, so they have SIMD / Vector processing and AI (urgh such an overstated buzz word). It also appears they have considered power management well, which is critical for different applications, especially in space.

“It is interesting that as an FPGA design company (primarily), we have designed in several MicroChip SAM71 RT and RH [radiation tolerant and radiation hardened] microcontrollers recently, which really provide some great capabilities where processing demands are low. I see HPSC as being very complementary to this range of devices, leaving the ultrahigh performance / very hard real time applications to be implemented in FPGA. Ultimately HPSC gives engineers another tool to choose from, and it is designed to prevent the all-too-common, start-from-scratch approach, which engineers love. Sadly, that approach always increases costs and technical risk on these projects, and we have enough of that already.”

One final note: During my research for this article, I discovered that NASA’s HPSC has not always been based on the RISC-V architecture. A presentation made at the Radiation Hardened Electronics Technology (RHET) Conference in 2018 by Wesley Powell, Assistant Chief for Technology at NASA Goddard Space Flight Center’s Electrical Engineering Division, includes a block diagram of the HPSC, which shows an earlier conceptual design based on eight Arm Cortex-A53 microprocessor cores with NEON SIMD vector engines and floating-point units. Powell continues to be the Principal Technologist on the HPSC program. At some point in the HPSC’s evolution over the past four years, at least by late 2020 when NASA published a Small Business Innovation Research (SBIR) project Phase I solicitation for the HPSC, the Arm processor cores had been replaced by a requirement for RISC-V processor cores. That change was formally cast in stone last September with the announcement of the project awards to Microchip and SiFive. A sign of the times, perhaps?
