
NASA Recruits Microchip, SiFive, and RISC-V to Develop 12-Core Processor SoC for Autonomous Space Missions

NASA’s Jet Propulsion Laboratory (JPL) has selected Microchip to design and manufacture the multi-core High Performance Spaceflight Computer (HPSC) microprocessor SoC. The HPSC is based on eight RISC-V X280 cores from SiFive with vector-processing instruction extensions, organized into two clusters, plus four additional RISC-V cores for general-purpose computing. The project’s operational goal is to develop “flight computing technology that will provide at least 100 times the computational capacity compared to current spaceflight computers.” During a talk at the recent RISC-V Summit, Pete Fiacco, a member of the HPSC Leadership Team and a JPL consultant, explained the overall HPSC program goals.

Despite the name, the HPSC is not strictly a processor SoC for space. It’s designed to be a reliable computer for a variety of applications on Earth – such as defense, commercial aviation, industrial robotics, and medical equipment – as well as a good candidate for use in government and commercial spacecraft. Beyond raw computing capability, the HPSC needs three characteristics: fault tolerance, radiation tolerance, and overall platform security. The project will result in the development of the HPSC chip, boards, a software stack, and reference designs, with initial availability in 2024 and space-qualified hardware available in 2025. Fiacco said that everything NASA JPL does in the future will be based on the HPSC.

NASA JPL set the goals for the HPSC based on its mission requirements to put autonomy into future spacecraft. Simply put, the tasks associated with autonomy are sensing, perceiving, deciding, and actuating. Sensing involves remote imaging using multi-spectral sensors and image processing. Perception instills meaning into the sensed data using additional image processing. Decision making includes mission planning that incorporates the vehicle’s current and future orientation. Actuation involves orbital and surface maneuvering and experiment activation and management.

Correlating these tasks with NASA’s overall objectives for its missions, Fiacco explained that the HPSC is designed to allow space-bound equipment to go, land, live, and explore in extraterrestrial environments. Spacecraft also need to report back to Earth, which is why Fiacco included communications in all four major tasks. All of this will require a huge leap in computing power. Simulations suggest that the HPSC increases computing performance by 1000X compared to the processors currently flying in space, and Fiacco expects that number to improve with further optimization of the HPSC’s software stack.

It’s hard to describe how much of an upgrade the HPSC represents for NASA JPL’s computing platform without contrasting the new machine with computers currently operating off planet. For example, the essentially similar, nuclear-powered Curiosity and Perseverance rovers currently trundling around Mars with semi-autonomy are based on RAD750 microprocessors from BAE Systems. (See “Baby You Can Drive My Rover.”) The RAD750 employs the 32-bit PowerPC 750 architecture and is manufactured with a radiation-tolerant semiconductor process. This chip has a maximum clock rate of 200 MHz and represents the best of computer architecture circa 2001. Reportedly, more than 150 RAD750 processors have been launched into space. Remember, NASA likes to fly hardware that’s flown before. One of the latest spacecraft to carry a RAD750 into space is the James Webb Space Telescope (JWST), which is now imaging the universe in the infrared spectrum and collecting massive amounts of new astronomical data while sitting in a Lagrange-point orbit one million miles from Earth. (That’s more than four times the distance to the moon.) The JWST’s RAD750 processor lopes along at 118 MHz.

Our other great space observatory, the solar-powered Hubble Space Telescope (HST), sports an even older processor. The HST payload computer is an 18-bit NASA Standard Spacecraft Computer-1 (NSSC-1) system built in the 1980s but designed even earlier. This payload computer controls and coordinates data streams from the HST’s various scientific instruments and monitors their condition. (See “Losing Hubble – Saving Hubble.”)

The original NSSC-1 computer was developed by the NASA Goddard Space Flight Center and Westinghouse Electric in the early 1970s. The design is so old that it’s not based on a microprocessor. The initial version of this computer incorporated 1700 DTL flat-pack ICs from Fairchild Semiconductor and used magnetic core memory. Long before the HST launched in 1990, the NSSC-1 processor design was “upgraded” to fit into some very early MSI TTL gate arrays, each incorporating approximately 130 gates of logic.

I’m not an expert in space-based computing, so I asked an expert for his opinion. The person I know who is most versed in space-based computing with microprocessors and FPGAs is my friend Adam Taylor, the founder and president of Adiuvo Engineering in the UK. I asked Taylor what he thought of the HPSC and he wrote:

“The HPSC is actually quite exciting for me. We do a lot in space and computation is a challenge. Many of the current computing platforms are based on older architectures like the SPARC (LEON series) or Power PC (RAD750 / RAD5545). Not only do these [processors] have less computing power, they also have ecosystems which are limited. Limited ecosystems mean longer development times (less reuse, more “fighting” with the tools as they are generally less polished) and they also limit attraction of new talent, people who want to work with modern frameworks, processors, and tools. This also limits the pool of experienced talent (which is an increasing issue like it is in many industries).

“The creation of a high-performance multicore processor based around RISC-V will open up a wide ecosystem of tools and frameworks while also providing attraction to new talent and widening the pool of experienced talent. The processors themselves look very interesting as they are designed with high performance in mind, so they have SIMD / Vector processing and AI (urgh such an overstated buzz word). It also appears they have considered power management well, which is critical for different applications, especially in space.

“It is interesting that as an FPGA design company (primarily), we have designed in several MicroChip SAM71 RT and RH [radiation tolerant and radiation hardened] microcontrollers recently, which really provide some great capabilities where processing demands are low. I see HPSC as being very complementary to this range of devices, leaving the ultrahigh performance / very hard real time applications to be implemented in FPGA.  Ultimately HPSC gives engineers another tool to choose from, and it is designed to prevent the all-too-common, start-from-scratch approach, which engineers love. Sadly, that approach always increases costs and technical risk on these projects, and we have enough of that already.”

One final note: During my research for this article, I discovered that NASA’s HPSC has not always been based on the RISC-V architecture. A presentation made at the Radiation Hardened Electronics Technology (RHET) Conference in 2018 by Wesley Powell, Assistant Chief for Technology at NASA Goddard Space Flight Center’s Electrical Engineering Division, includes a block diagram of the HPSC, which shows an earlier conceptual design based on eight Arm Cortex-A53 microprocessor cores with NEON SIMD vector engines and floating-point units. Powell continues to be the Principal Technologist on the HPSC program. At some point in the HPSC’s evolution over the past four years, at least by late 2020 when NASA published a Small Business Innovation Research (SBIR) project Phase I solicitation for the HPSC, the Arm processor cores had been replaced by a requirement for RISC-V processor cores. That change was formally cast in stone last September with the announcement of the project awards to Microchip and SiFive. A sign of the times, perhaps?
