AI Boldly Goes Behind the Beyond

I used to love the title sequence at the beginning of each episode of Star Trek: The Original Series starring William Shatner as Captain James Tiberius Kirk. I’m thinking of the part where the announcer waffled on about the Enterprise’s five-year mission “to boldly go behind the beyond, behind which no man has boldly gone behind, beyond, before” (or words to that effect). Well, it seems to be artificial intelligence’s turn to boldly go behind the beyond.

As an aside, writing the previous paragraph reminded me of The Boys: A Memoir of Hollywood and Family by brothers Ron Howard (The Andy Griffith Show, American Graffiti, Happy Days…) and Clint Howard (The Andy Griffith Show, Gentle Ben…). In addition to being one of the best autobiographical books I’ve read, it revealed something that surprised me: 7-year-old Clint played Balok in The Corbomite Maneuver, which was the tenth episode in season one of Star Trek.

As another aside (I simply cannot help myself), have you ever pondered the Fermi paradox, which is the discrepancy between the lack of conclusive evidence of advanced extraterrestrial life and the potentially high likelihood of its existence? As we discover in the science fiction novel The Forge of God by Greg Bear, one answer to Fermi’s puzzling poser is that electromagnetically noisy civilizations like ours might be snuffed out by the arrival of artificially intelligent self-replicating machines designed to destroy any potential threat to their (possibly long-dead) creators. It has to be acknowledged that having the Earth destroyed in The Forge of God is a bit of a bummer, but we get our own back in Anvil of Stars when… but no; I’ll let you discover what happens for yourself.

To be honest, when it comes to artificial intelligence (AI) in the context of space—excepting extraterrestrial encounters such as those discussed above (or, preferably, more human-friendly fraternizations of the ilk depicted in Batteries Not Included and Flight of the Navigator)—I’ve only really thought about things like AI-powered autonomous space probes. Thus far, however, I’d never really given much thought to using AI as part of designing things like satellites and space probes.

Trust me; you don’t want to be the one who failed to fully verify a satellite’s retro encabulation system.

All this was to change recently when I got to chat with Ossi Saarela, who is Space Segment Manager at MathWorks. Prior to his current role, Ossi spent 18 years as a practicing aerospace engineer working on mega-cool programs like the International Space Station (ISS). Ossi’s specializations include spacecraft operations and spacecraft autonomy.

One of Ossi’s current focuses is the simulation of space systems. Another is the use of AI, both to design and verify space systems and as an integral part of those systems. It’s important to get things right the first time with any system, but even more so with systems destined for space because (a) getting them there is horrendously expensive and (b) it’s close to impossible to fix them once they are in space (the Hubble Space Telescope being one of the few exceptions that prove the rule).

Lest I forget, before we plunge headfirst into the fray with gusto and abandon, a couple of useful links are as follows: Using MATLAB and Simulink for Space Systems and Machine Learning for Space Missions: A Game Changer for Vision-Based Sensing.

I’m afraid this is the point where things become a little recursive (“in order to understand recursion, you must first understand recursion,” as programmers are prone to proclaim).

Let’s start with the fact that you need a humongous amount of data to train an AI model, and not just any old data will do. It needs to be good data because bad data can leave you spending inordinate amounts of time trying to determine why your model isn’t working as expected. 

Rather than banging your head against the wall tweaking your AI model’s architecture and parameters, consider that time spent improving the training data and testing thoroughly has been shown to yield larger improvements in accuracy. But where are we to get this data? Since it can be difficult to obtain real-world data from systems deployed in space, simulation often provides a solution. Using simulation to augment existing AI training data has multiple benefits, including the fact that running computational simulations is much less costly than performing physical experiments. Also of interest is the fact that simulations provide access to internal states that might not be accessible in an experimental setup. Furthermore, with simulation, engineers have full control over the environment and can recreate scenarios that are too difficult, too dangerous, or even impossible to create in the real world.
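
By way of a back-of-the-envelope illustration (mine, I hasten to add, not something from the MathWorks folks), the little Python sketch below sweeps scenario parameters through a toy range-sensor model and harvests labeled training data, with the ground-truth labels coming for free because we’re in simulation. The sensor model, parameter ranges, and noise levels are all hypothetical values I’ve conjured up for illustration.

```python
# Minimal sketch (hypothetical example): generate AI training data by sweeping
# scenario parameters through a toy simulated sensor. Every number and the toy
# sensor model itself are assumptions made purely for illustration.
import numpy as np

rng = np.random.default_rng(seed=42)

def simulate_range_measurement(true_range_m, sun_angle_rad, noise_std_m):
    """Toy sensor model: measured range degrades as the sun angle closes."""
    glare_bias = 0.05 * true_range_m * np.cos(sun_angle_rad) ** 2
    return true_range_m + glare_bias + rng.normal(0.0, noise_std_m)

samples, labels = [], []
# Full control over the environment: sweep conditions that would be hard (or
# downright impossible) to arrange in a real test campaign.
for true_range in np.linspace(10.0, 500.0, 50):        # metres
    for sun_angle in np.linspace(0.0, np.pi / 2, 10):  # radians
        for noise_std in (0.1, 0.5, 2.0):              # metres
            measured = simulate_range_measurement(true_range, sun_angle, noise_std)
            samples.append([measured, sun_angle])
            labels.append(true_range)                  # ground truth is free in simulation

X = np.array(samples)    # inputs for an AI model
y = np.array(labels)     # labels the simulation hands us "for free"
print(X.shape, y.shape)  # (1500, 2) (1500,)
```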

This is where the recursion starts to kick in, because people are starting to use AI models to approximate the workings of complex systems. Suppose you wish to create control algorithms that will—in the fullness of time—interact with a physical system. Rather than using the physical system itself, the key to enabling rapid design iteration for your algorithms is to create a physics-based simulation model that recreates the physical system and its environment with sufficient accuracy for your algorithms to interact with.
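
To make this a tad more tangible (again, this is my own hand-waving sketch rather than anything Ossi showed me), the snippet below uses a first-principles, single-axis spacecraft attitude model as the “physical system” with which a candidate control law can interact during design iteration. The inertia, gains, and time step are placeholder assumptions.

```python
# Minimal sketch (assumptions throughout): a first-principles, single-axis
# spacecraft attitude model acting as the stand-in "physical system" for a
# candidate control algorithm. Inertia, gains, and time step are illustrative.
import numpy as np

I = 10.0                  # moment of inertia about one axis, kg*m^2 (assumed)
dt = 0.1                  # integration step, s
theta, omega = 0.3, 0.0   # initial attitude error (rad) and body rate (rad/s)
theta_cmd = 0.0           # commanded attitude

def pd_controller(theta, omega, kp=8.0, kd=25.0):
    """Candidate control law under test (gains are placeholders)."""
    return -kp * (theta - theta_cmd) - kd * omega

history = []
for _ in range(600):                      # 60 s of simulated time
    torque = pd_controller(theta, omega)
    alpha = torque / I                    # first-principles dynamics: I * omega_dot = torque
    omega += alpha * dt                   # simple Euler integration
    theta += omega * dt
    history.append((theta, omega, torque))  # logged for later analysis

print(f"final attitude error: {theta:.4f} rad")
```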

But there’s an elephant in the room and a fly in the soup (I never metaphor I didn’t like). Historically, to achieve the necessary accuracy, engineers have created high-fidelity physics-based models from first principles (i.e., from the ground up). But these models can take a long time to build and a long time to simulate. The problem is only exacerbated when large numbers of models representing different parts of the system are combined in a single simulation.

One solution is to simulate each high-fidelity model in isolation, and then use the captured input stimuli and output responses to train corresponding AI models. These reduced-order AI models are much less computationally expensive than their first-principles counterparts, thereby enabling engineers to perform more exploration of the solution space (where no one can hear you scream). Of course, the physics-based models can always be used later in the process to validate the design arrived at using the AI models.
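
Purely as an illustrative sketch of my own devising (and assuming Python with scikit-learn is to hand), the snippet below captures input/output pairs from a stand-in “high-fidelity” function and fits a small neural network to serve as the reduced-order surrogate.

```python
# Minimal sketch (hypothetical example): train a reduced-order "surrogate" model
# on input/output data captured from an expensive high-fidelity simulation, then
# use the cheap surrogate for design-space exploration. The "high-fidelity"
# model here is just a stand-in function.
import numpy as np
from sklearn.neural_network import MLPRegressor  # assumes scikit-learn is installed

rng = np.random.default_rng(0)

def high_fidelity_sim(inputs):
    """Placeholder for a slow first-principles model (e.g., thermal or structural)."""
    x1, x2 = inputs[:, 0], inputs[:, 1]
    return np.sin(3 * x1) * np.exp(-x2) + 0.1 * x1 * x2

# Step 1: run the expensive model a limited number of times to capture stimulus/response pairs.
X_train = rng.uniform(0.0, 1.0, size=(500, 2))
y_train = high_fidelity_sim(X_train)

# Step 2: fit the cheap reduced-order model to that captured data.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

# Step 3: explore the design space with the fast surrogate instead of the slow model.
X_test = rng.uniform(0.0, 1.0, size=(5, 2))
print("surrogate:", surrogate.predict(X_test))
print("high-fi  :", high_fidelity_sim(X_test))
```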

Alternatively, in some cases, it’s possible to use real-world data from a physical system to train the AI model, thereby completely bypassing the creation of a physics-based model. Of course, once you have AI models to represent each of your subsystems, you could use these as part of a simulation to generate the data to train an AI model of the entire system, which returns us to the part where I started to waffle about recursion. 
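
And just to show what that recursive bit might look like in (toy) practice, the sketch below chains two pretend subsystem surrogates and uses their composed behavior to train a single system-level model. The subsystem functions and every number in them are entirely made up.

```python
# Minimal sketch (hypothetical end to end): two already-trained subsystem
# surrogates (represented here by stand-in functions) are chained together,
# and the composed behavior is used to train one system-level AI model.
import numpy as np
from sklearn.neural_network import MLPRegressor  # assumes scikit-learn is installed

rng = np.random.default_rng(2)

def power_subsystem_surrogate(sun_fraction):
    """Stand-in for a trained AI model of the power subsystem."""
    return 100.0 * sun_fraction ** 1.2           # available power, W (made up)

def thermal_subsystem_surrogate(power_w):
    """Stand-in for a trained AI model of the thermal subsystem."""
    return 250.0 + 0.3 * power_w                 # radiator temperature, K (made up)

# Compose the subsystem models to generate system-level training data...
sun_fractions = rng.uniform(0.0, 1.0, size=(1000, 1))
temps = np.array([thermal_subsystem_surrogate(power_subsystem_surrogate(s[0]))
                  for s in sun_fractions])

# ...and train one AI model of the entire (toy) system. Cue the recursion.
system_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
system_model.fit(sun_fractions, temps)
print(system_model.predict([[0.25], [0.75]]))
```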

But wait, there’s more… My degree in Control Systems involved a core of math sufficient to make your eyes water, coupled with electronics, mechanics, and fluidics (hydraulics and pneumatics). It also involved a lot of algorithms (oh, so many algorithms). Increasingly, engineers use simulations as part of the process of designing and verifying their algorithms.

A big problem with creating control algorithms is ensuring that they fully address the complex non-linearities inherent in many real-world systems. One solution is to use data (either measured or simulated) to train an AI control algorithm (a model) that can predict unobserved states from observed states. This model can subsequently be employed to control the real-world system.
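
Speaking only for myself and my sketchpad once more, the snippet below trains a small model to predict an unobserved state (the body rate) from a short window of noisy attitude measurements generated by the toy single-axis simulation we met earlier. The noise levels, window length, and gains are all assumptions made for illustration.

```python
# Minimal sketch (all values assumed): train a model to predict an unobserved
# state (body rate) from observed states (a short window of noisy attitude
# measurements), using data logged from the toy single-axis simulation.
import numpy as np
from sklearn.neural_network import MLPRegressor  # assumes scikit-learn is installed

rng = np.random.default_rng(1)
dt, I = 0.1, 10.0
WINDOW = 5                                        # past attitude samples fed to the model

def run_episode(theta0, omega0, steps=200):
    """Simulate the plant and log noisy attitude plus the true (unobserved) rate."""
    theta, omega = theta0, omega0
    meas, rates = [], []
    for _ in range(steps):
        torque = -8.0 * theta - 25.0 * omega          # same placeholder control law
        omega += (torque / I) * dt
        theta += omega * dt
        meas.append(theta + rng.normal(0.0, 0.005))   # what the sensor actually sees
        rates.append(omega)                           # ground truth, free in simulation
    return np.array(meas), np.array(rates)

X, y = [], []
for theta0 in np.linspace(-0.5, 0.5, 20):             # vary the scenario
    meas, rates = run_episode(theta0, rng.normal(0.0, 0.05))
    for k in range(WINDOW, len(meas)):
        X.append(meas[k - WINDOW:k])                  # observed: recent attitude history
        y.append(rates[k])                            # unobserved: current body rate

estimator = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
estimator.fit(np.array(X), np.array(y))
print("predicted rate:", estimator.predict(np.array(X[:3])))
print("true rate     :", y[:3])
```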

So, summarizing all the above in a crunchy nutshell: (a) we can use simulation to generate data to train AI models, (b) we can use the trained AI models to speed up our simulations, and (c) we can create AI-based control algorithms that we train using data generated by AI-model-based simulations, after which we can verify these algorithms using AI-model-based simulations. (I feel it would be recursive of me to return to the topic of recursion.)

I was going to talk about using AI for tasks like Rendezvous, Proximity Operations, and Docking (RPOD), alighting probes on asteroids and comets, landing rovers on the Moon and Mars, and… so much more, but I’m afraid that will have to wait for another day because (what I laughingly call) my mind seems to be stuck in a recursive loop. It’s like déjà vu all over again (did someone just say that?).
