Securing Artificial Intelligence Before It Secures Us!

Since I spend an inordinate and unfortunate amount of time worrying about the possibility of a forthcoming artificial intelligence (AI) apocalypse, I was delighted to hear that the folks at ETSI have plunged into the fray by establishing the world’s first standardization initiative dedicated to securing AI. We will return to ETSI’s initiative shortly, but first…

To be honest, things are now happening so fast in the AI world that it’s starting to make my head spin (see also What the FAQ are AI, ANNs, ML, DL, and DNNs?). As I’ve mentioned before, AI has been a long time coming. Way back in the 1840s, Ada Lovelace, who was assisting Charles Babbage in his quest to build a mechanical computer called the Analytical Engine, jotted down some thoughts about the possibility of computers one day using numbers as symbols to represent other things, such as musical notes. She even went so far as to speculate about machines:

[…] having the ability to compose elaborate and scientific pieces of music of any degree of complexity or extent.

That sort of sounds like AI to me. In 1950, a little over 100 years after Ada penned her musings, the English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist Alan Mathison Turing wrote a seminal paper, Computing Machinery and Intelligence, in which he considered the question, “Can machines think?” I fear Alan wasn’t overly enthused by his own conclusions, because during a lecture he gave the following year he noted:

It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control.

Of course, all of this seemed to be extremely futuristic back in those days of yore. Thinking machines were widely considered to be the stuff of science fiction. Having said this, the founding event for the field of AI as we know and love it today occurred just a few years later in 1956 at a gathering known informally as the Dartmouth workshop.

Following this workshop, nothing much seemed to be going on AI-wise as far as the unwashed masses were concerned. Behind the scenes, however, in academia and in research and development teams around the world, a huge amount of activity, time, and resources was being expended on building the foundations upon which modern AI stands.

Around 10 years ago, AI exited its ivory tower and entered the real world. Since that time, the use of AI has exploded into an incredible diversity of deployments, from handwriting recognition to speech recognition to machine vision to robotics to predictive maintenance to the creation of deepfake audio and video, to name but a few.

On the One Hand

On the one hand, I’m tremendously enthused by AI. Take JOYCE, for example. The brainchild of those clever guys and gals at Immervision, JOYCE is set to be the first humanoid robot developed as a collaboration by the computer vision community, with the goal of equipping machines with human-like perception. As I wrote in an earlier column:

One very clever aspect of all this is the way in which JOYCE employs the latest and greatest in data-in-picture technology, in which meta-information is embedded directly into the pixels forming the images. By means of this technology, each of JOYCE’s video frames can be enriched with data from a wide array of sensors providing contextual information that can be used by AI, neural networks, computer vision, and simultaneous localization and mapping (SLAM) algorithms to help increase her visual perception, insight, and discernment.

When I wrote those words, the sort of sensor data I was envisaging embedding was along the lines of audio (from microphones), motion and orientation (from accelerometers, gyroscopes, and magnetometers), environmental (from temperature, humidity, and barometric pressure sensors), and so forth. Well, as of this week, we can also add the sense of smell to this list, because SmartNanotubes Technologies has just launched a nifty new nanotube-based olfactory sensor. This machine olfaction capability means next-generation robots will be able to detect, recognize, and respond to smells (see The Electronic Nose Knows!).
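
If you’re wondering what embedding data directly into pixels might look like at its very simplest, the little Python sketch below tucks a few bytes of metadata into the least-significant bits of a frame’s pixel values and then pulls them back out again. (I should note that Immervision’s actual encoding scheme is proprietary and far more sophisticated; everything here, including the function names and the payload format, is just my own back-of-the-envelope illustration of the general concept.)

```python
# A rough illustration of the data-in-picture idea: packing a few bytes of
# sensor metadata into the least-significant bits (LSBs) of a frame's pixels.
import numpy as np

def embed_metadata(frame: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the LSBs of the first len(payload)*8 pixels."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    out = frame.ravel().copy()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits  # overwrite LSBs only
    return out.reshape(frame.shape)

def extract_metadata(frame: np.ndarray, nbytes: int) -> bytes:
    """Recover nbytes of payload from the frame's leading pixel LSBs."""
    bits = frame.ravel()[: nbytes * 8] & 1
    return np.packbits(bits).tobytes()

# Usage: tag a frame with (hypothetical) temperature and heading readings.
frame = np.random.default_rng(0).integers(0, 256, (480, 640), dtype=np.uint8)
tagged = embed_metadata(frame, b"T=21.5C;HDG=087")
print(extract_metadata(tagged, 15))  # b'T=21.5C;HDG=087'
```

The point is that the metadata travels inside the frame itself, so it can never become separated from the pixels it describes.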

Speaking of next-generation robots, do you recall my column Do Robot Dogs Dream of Cyborg Cats? in which we were introduced to the concept of using decentralized AI to create a pack of robot dogs? Shortly after this, I ran across some Uber-Cute Robot Puppies. Also, as I discussed in Do You Love Me?, I’m sure you’ve seen this video of humanoid and canine robots from Boston Dynamics dancing like Olympic champions.

If we are lucky, our future will be bright indeed (which reminds me of the 1986 song The Future’s So Bright, I Gotta Wear Shades by Timbuk 3). In our case, of course, the “shades” will embody the combination of AI and augmented reality (AR), which — I firmly believe — will change the way in which we interact with the world, our systems, and each other (see also What the FAQ are VR, MR, AR, DR, AV, and HR?).

On the Other Hand

On the other hand, I have to admit that the current speed of development is a tad disconcerting. The premise of the 1984 movie The Terminator is that an artificial intelligence defense network called Skynet becomes self-aware and initiates a nuclear holocaust (one of the early chase scenes in Terminator 2: Judgment Day is the main reason I want to learn to ride a motorbike — I’d feel really stupid being pursued by a homicidal robot, finding a motorbike with the keys in the ignition, but having to keep on running because I didn’t know how to ride it).

About 10 years ago I read Robopocalypse by Daniel H. Wilson, who holds a Ph.D. in Robotics from Carnegie Mellon University in Pittsburgh. Now I wish I hadn’t. It was a great book, but it painted a very scary picture. More recently, I watched the 2020 American science fiction crime drama television series Next, which didn’t make me feel any happier.

It’s not all that long ago that Stephen Hawking, Elon Musk, and Bill Gates warned us about the existential threat of artificial intelligence. As I discussed in my column The Artificial Intelligence Apocalypse, scientists at MIT have managed to create the world’s first psychopathic AI, which they named Norman after the lead character in Alfred Hitchcock’s movie Psycho. When you watch Boston Dynamics’ Atlas robot performing gymnastics as seen in this video, it’s hard to suppress the thought of being pursued by such a machine armed with an AR-15 and equipped with a Norman-like AI.

In the 1987 novel Great Sky River by Gregory Benford, which is set tens of thousands of years in the future, humans have spread through the galaxy. As they approach the galactic center, they encounter mechanoid civilizations. The precursors to these mechanoids must have been created by one or more biological species at some time in the dim and distant past, but that was so long ago that the mechanoids have no recollection of it and they now continue to evolve “new and improved” versions of themselves.

Do you remember the American science fiction comedy-drama The Orville? Isaac, who is the science and engineering officer, is a member of an artificial, non-biological race that regards biological lifeforms as being inferior. In the Identity sub-story, we get to visit Isaac’s home world, Kaylon 1. It doesn’t take long before the precocious kid (there’s always one) blunders his way into the catacombs beneath the city. I can still remember the shiver that ran up and down my spine when we got to see mountain after mountain of skeletal humanoid remains that were once the planet’s organic population. It turns out that the robots turned against their creators, who — sadly — hadn’t heard of Isaac Asimov’s Three Laws of Robotics.

ETSI SAI ISG

And so we return to the folks at ETSI, who never met a TLA (three-letter acronym) they didn’t like. ETSI, which is an independent, not-for-profit standardization organization in the field of information and communications, used to be an FLA (four-letter acronym) for “European Telecommunications Standards Institute.” Over time, however, it evolved into a more global role, so “ETSI” now stands on its own.

I was just chatting with Alex Leadbeater, who is Chair of the Cyber Security Technical Committee at ETSI. Alex is also chair of ETSI’s Securing Artificial Intelligence Industry Specification Group, or SAI ISG for short. This group doesn’t concern itself with what we do with AI or with AI apps targeted at specific purposes. Instead, the focus of the group is “How do we trust AI?” and “How do we know what our AIs are actually doing?”

As Alex says, the problem is that many traditional security paradigms don’t lend themselves to AI, whose richer parallel processes and layers present different attack surfaces to nefarious players, some of whom may themselves be AIs, because AI can be used for both defensive and offensive purposes (the mind boggles).

In a nutshell, the ETSI SAI ISG recently released its first report, ETSI GR SAI 004, which gives an overview of the problem statement regarding the securing of AI. This report describes the issues involved in securing AI-based systems and the challenges relating to confidentiality, integrity, and availability at each stage of the machine learning lifecycle. It also points out some of the broader challenges of AI systems, including bias, ethics, and explainability. Furthermore, a number of different attack vectors are outlined, along with several real-world use cases and attacks.
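
To make the notion of an attack vector a little more concrete, consider the classic evasion attack, in which an adversary perturbs an input just enough to fool a trained model. The Python sketch below applies the well-known Fast Gradient Sign Method (FGSM) to a toy logistic-regression classifier; to be clear, this is purely my own illustration of the general idea and isn’t drawn from the ETSI report.

```python
# An evasion attack in miniature: the Fast Gradient Sign Method (FGSM)
# applied to a toy logistic-regression "model" in pure NumPy.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical trained weights and bias for a 16-feature binary classifier.
w = rng.normal(size=16)
b = 0.1

def predict(x: np.ndarray) -> float:
    """Return the model's probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Construct a clean input that the model confidently places in class 1.
x_clean = np.sign(w) * rng.uniform(0.5, 1.5, size=16)

# FGSM: step each feature in the direction that most increases the loss.
# For true label 1, the loss gradient w.r.t. x points along -w, so the
# attacker perturbs the input by -epsilon * sign(w).
epsilon = 2.0  # an exaggerated perturbation budget, for a clear demo
x_adv = x_clean - epsilon * np.sign(w)

print(f"clean input score: {predict(x_clean):.3f}")  # close to 1.0
print(f"adversarial score: {predict(x_adv):.3f}")    # pushed toward 0.0
```

Scale that same trick up to a deep network with millions of parameters and you can see why the report pays so much attention to integrity across the machine learning lifecycle.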

It’s important to note that this report is just Phase 1 in a series. For example, the designers of traditional embedded systems have the concept of a Root of Trust (RoT), which is a source that can always be trusted within a system, and which — amongst other things — ensures the system cannot be compromised while being powered up. So, one of the topics to be targeted by the ETSI SAI ISG in Phase 2 is the creation of an AI Root of Trust (AIRoT).
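
What might a root of trust look like when extended to cover an AI system? One simple possibility I can imagine (and I stress that this is my own speculative Python sketch, not anything specified by ETSI) is for the system to verify its neural network weights against a signature anchored in the hardware RoT before loading them, refusing to run any model that fails the check:

```python
# A speculative sketch of AIRoT-style model verification: refuse to load
# model weights unless their HMAC-SHA256 tag, computed with a key anchored
# in the hardware root of trust, matches the tag recorded at provisioning.
import hashlib
import hmac
from pathlib import Path

# In a real system this key would never appear in software; it would live
# inside a TPM or secure element. It's a constant here purely for illustration.
TRUSTED_KEY = b"key-provisioned-into-hardware-rot"

def sign_model(weights_path: Path) -> str:
    """Compute the tag for a model file (done once, at provisioning time)."""
    return hmac.new(TRUSTED_KEY, weights_path.read_bytes(),
                    hashlib.sha256).hexdigest()

def load_model_verified(weights_path: Path, expected_tag: str) -> bytes:
    """Return the weight bytes only if they pass the integrity check."""
    data = weights_path.read_bytes()
    tag = hmac.new(TRUSTED_KEY, data, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected_tag):
        raise RuntimeError("Model weights failed integrity check; not loading")
    return data
```

Of course, this only tells us the weights haven’t been tampered with since they were signed; it says nothing about whether the model was trained on poisoned data in the first place, which gives you a taste of why securing AI is so much thornier than securing a traditional system.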

Personally, I find that the challenges involved in securing traditional computing and communications systems make my head hurt. I don’t even want to think about the trials and tribulations involved in securing AI systems. Based on this, I’m delighted to hear that the stalwart chaps and chapesses in the ETSI SAI ISG are undertaking this daunting duty on our behalf. How about you? Do you have any thoughts you’d care to share on anything we’ve discussed here?
