
AI Boldly Goes Behind the Beyond

I used to love the title sequence at the beginning of each episode of Star Trek: The Original Series starring William Shatner as Captain James Tiberius Kirk. I’m thinking of the part where the announcer waffled on about the Enterprise’s five-year mission “to boldly go behind the beyond, behind which no man has boldly gone behind, beyond, before” (or words to that effect). Well, it seems to be artificial intelligence’s turn to boldly go behind the beyond.

As an aside, writing the previous paragraph reminded me of The Boys: A Memoir of Hollywood and Family by brothers Ron Howard (The Andy Griffith Show, American Graffiti, Happy Days…) and Clint Howard (The Andy Griffith Show, Gentle Ben…). In addition to being one of the best autobiographical books I’ve read, it revealed something that surprised me: 7-year-old Clint played Balok in The Corbomite Maneuver, the tenth episode of season one of Star Trek.

As another aside (I simply cannot help myself), have you ever pondered the Fermi paradox, which is the discrepancy between the lack of conclusive evidence of advanced extraterrestrial life and the potentially high likelihood of its existence? As we discover in the science fiction novel The Forge of God by Greg Bear, one answer to Fermi’s puzzling poser is that electromagnetically noisy civilizations like ours might be snuffed out by the arrival of artificially intelligent self-replicating machines designed to destroy any potential threat to their (possibly long-dead) creators. It has to be acknowledged that having the Earth destroyed in The Forge of God is a bit of a bummer, but we get our own back in Anvil of Stars when… but no; I’ll let you discover what happens for yourself.

To be honest, when it comes to artificial intelligence (AI) in the context of space—excepting extraterrestrial encounters such as those discussed above (or, preferably, more human-friendly fraternizations of the ilk depicted in Batteries Not Included and Flight of the Navigator)—I’ve only really thought about things like AI-powered autonomous space probes and suchlike. Until recently, however, I’d never really thought about using AI as part of designing things like satellites and space probes.

Trust me; you don’t want to be the one who failed to fully verify a satellite’s retro encabulation system.

All this was to change recently when I got to chat with Ossi Saarela, who is Space Segment Manager at MathWorks. Prior to his current role, Ossi spent 18 years as a practicing aerospace engineer working on mega-cool programs like the International Space Station (ISS). Ossi’s specializations include spacecraft operations and spacecraft autonomy.

One of Ossi’s current focuses is the simulation of space systems. Another is the use of AI both to design and verify space systems and as an integral part of those systems. It’s important to get things right the first time with any system, but even more so with systems destined for space because (a) getting them there is horrendously expensive and (b) it’s close to impossible to fix them once they are in space (the Hubble Space Telescope being one of the few exceptions that prove the rule).

Lest I forget, before we plunge headfirst into the fray with gusto and abandon, a couple of useful links are as follows: Using MATLAB and Simulink for Space Systems and Machine Learning for Space Missions: A Game Changer for Vision-Based Sensing.

I’m afraid this is the point where things become a little recursive (“in order to understand recursion, you must first understand recursion,” as programmers are prone to proclaim).

Let’s start with the fact that you need a humongous amount of data to train an AI model, and not just any old data will do. It needs to be good data because bad data can leave you spending inordinate amounts of time trying to determine why your model isn’t working as expected. 

Rather than banging your head against the wall tweaking your AI model’s architecture and parameters, it has been shown that time spent improving the training data and testing thoroughly can often yield larger improvements in accuracy. But where are we to get this data? Since it can be difficult to obtain real-world data from systems deployed in space, simulation often provides a solution. The use of simulation to augment existing AI model training data has multiple benefits, including the fact that running computational simulation is much less costly than performing physical experiments. Also of interest is the fact that simulations provide access to internal states that might not be accessible in an experimental setup. Furthermore, in the case of simulation, engineers have full control over the environment and can simulate scenarios that are too difficult, too dangerous, or even impossible to create in the real world. 
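To make the idea concrete, here’s a toy Python sketch (everything in it is hypothetical—a real spacecraft simulation would be vastly more elaborate) that sweeps a simple physics model across randomly generated scenarios to manufacture labeled training data:

```python
import math
import random

def simulate_decay(k, t):
    """Toy 'physics simulation': exponential decay, standing in for a
    far more elaborate spacecraft model (hypothetical example)."""
    return math.exp(-k * t)

# Sweep scenarios that would be too costly (or downright impossible)
# to run as physical experiments, capturing labeled (input, output) pairs.
random.seed(42)
training_data = []
for _ in range(1000):
    k = random.uniform(0.1, 2.0)   # a design parameter
    t = random.uniform(0.0, 5.0)   # a sample time
    training_data.append(((k, t), simulate_decay(k, t)))

print(len(training_data))  # 1000 labeled samples, ready for model training
```

The same loop run against a physical test rig would take days and dollars; run against a simulation, it takes milliseconds, and every internal state is there for the inspecting.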

This is where the recursion starts to kick in, because people are starting to use AI models to approximate the workings of complex systems. Suppose you wish to create control algorithms that will—in the fullness of time—interact with a physical system. Rather than iterating against the physical system itself, the key to rapid design iteration is to create a physics-based simulation model accurate enough to recreate the physical system and environment with which your algorithms can interact.
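A minimal sketch of this pattern in Python (the plant, the gains, and every number here are invented for illustration): a candidate control law iterating against a first-principles spring-mass-damper model instead of real hardware:

```python
def simulate_closed_loop(kp=2.0, dt=0.01, steps=2000, setpoint=1.0):
    """Run a proportional controller against a toy spring-mass-damper
    plant via Euler integration. Entirely hypothetical numbers."""
    pos, vel = 0.0, 0.0                        # plant state
    for _ in range(steps):
        force = kp * (setpoint - pos)          # candidate control law
        acc = force - 0.5 * vel - 1.0 * pos    # first-principles physics
        vel += acc * dt
        pos += vel * dt
    return pos

# Iterate on the design cheaply: try a gain, inspect the response.
final_pos = simulate_closed_loop()
print(f"settled position: {final_pos:.3f}")
```

Note the steady-state offset a pure proportional controller leaves behind (the position settles near kp/(kp+1) rather than at the setpoint)—exactly the kind of insight that’s cheap to discover in simulation and ruinously expensive to discover on orbit.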

But there’s an elephant in the room and a fly in the soup (I never metaphor I didn’t like). Historically, to achieve the necessary accuracy, engineers have created high-fidelity physics-based models from first principles (i.e., from the ground up). But these models can take a long time to build and a long time to simulate. The problem is only exacerbated when large numbers of models representing different parts of the system are combined in a single simulation.

One solution is to simulate each high-fidelity model in isolation, and then use the captured input stimulus and output responses to train corresponding AI models. These reduced-order AI models are much less computationally expensive than their first-principles counterparts, thereby enabling the engineers to perform more exploration of the solution space (where no one can hear you scream). Of course, any physics-based models can always be used later in the process to validate the design determined using the AI model.
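Here’s the pattern in miniature Python, with a polynomial fit standing in for a trained neural network and a toy function standing in for the expensive model (all hypothetical):

```python
import numpy as np

def high_fidelity(x):
    """Stand-in for an expensive first-principles simulation."""
    return np.sin(x) * np.exp(-0.1 * x)

# 1. Simulate the high-fidelity model in isolation, capturing the
#    input stimulus and output responses.
x_train = np.linspace(0.0, 5.0, 50)
y_train = high_fidelity(x_train)

# 2. Train a cheap reduced-order surrogate on the captured data
#    (a polynomial here; typically a neural network in practice).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=6))

# 3. Explore the solution space with the inexpensive surrogate, then
#    validate promising designs against the physics-based model later.
x_test = np.linspace(0.0, 5.0, 500)
max_err = np.max(np.abs(surrogate(x_test) - high_fidelity(x_test)))
print(f"max surrogate error: {max_err:.4f}")
```

Evaluating the surrogate costs a handful of multiply-adds per query, which is what makes sweeping thousands of design variations practical.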

Alternatively, in some cases, it’s possible to use real-world data from a physical system to train the AI model, thereby completely bypassing the creation of a physics-based model. Of course, once you have AI models to represent each of your subsystems, you could use these as part of a simulation to generate the data to train an AI model of the entire system, which returns us to the part where I started to waffle about recursion. 

But wait, there’s more… My degree in Control Systems involved a core of math sufficient to make your eyes water, coupled with electronics, mechanics, and fluidics (hydraulics and pneumatics). It also involved a lot of algorithms (oh, so many algorithms). Increasingly, engineers use simulations as part of the process of designing and verifying their algorithms.

A big problem when creating control algorithms is ensuring they fully address the complex non-linearities inherent in many real-world systems. One solution is to use data (either measured or simulated) to train an AI model that can predict unobserved states from observed states. This model can subsequently be employed as part of controlling the real-world system.
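A bite-sized Python sketch of the idea (simulated data, with a least-squares fit standing in for a trained neural network; everything here is hypothetical): train a model to predict an unobserved velocity from a window of observed, noisy positions:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1
t = np.arange(0.0, 10.0, dt)

# Generate training trajectories: we observe noisy position only; the
# velocity (the unobserved state) is known only because we simulated it.
features, targets = [], []
for _ in range(20):
    v0 = rng.uniform(-1.0, 1.0)                   # constant true velocity
    pos = v0 * t + rng.normal(0.0, 0.01, t.size)  # observed, noisy
    for i in range(2, t.size):
        features.append([pos[i], pos[i - 1], pos[i - 2]])  # observed window
        targets.append(v0)                                 # unobserved state
X, y = np.array(features), np.array(targets)

# Fit a linear predictor (a stand-in for a trained neural network).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
rms = np.sqrt(np.mean((X @ w - y) ** 2))
print(f"RMS velocity-estimation error: {rms:.3f}")
```

In a deployed system, the trained predictor would then run onboard, estimating states that no sensor measures directly.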

So, if we re-read all the above, in a crunchy nutshell: (a) we can use simulation to generate data to train AI models, (b) we can use the trained AI models to speed our simulations, and (c) we can create AI-based control algorithms that we train using data generated by AI-model-based simulations after which we can verify these algorithms using AI-model-based simulations. (I feel it would be recursive of me to return to the topic of recursion.)

I was going to talk about using AI for tasks like Rendezvous, Proximity Operations, and Docking (RPOD), alighting probes on asteroids and comets, landing rovers on the Moon and Mars, and… so much more, but I’m afraid that will have to wait for another day because (what I laughingly call) my mind seems to be stuck in a recursive loop. It’s like déjà vu all over again (did someone just say that?).
