feature article
Is AI Poised to Run Amok (Part 1)?

I know… I know… everywhere we turn these days, we’re presented with yet another story focused on artificial intelligence (AI) and/or machine learning (ML). Do we really need one more? Well, since I’m poised to pen one, I’d have to say “Yes” (otherwise, I’d have to go back and start again, and that’s a future I’m not prepared to embrace).

Writing the preceding paragraph caused a song by Wings to pop into my head (and that’s not something I expected to hear myself saying when I woke up this morning). I’m thinking of Silly Love Songs by Paul and Linda McCartney—the part that goes ‘Cause here I go again.

So, here we go again ourselves. This all started when one of my colleagues here on EEJournal, Steve Leibson, sent me an email with the subject line “AI Run Amok.” This led me to a story on The Drive about Hertz using AI to detect damage to rental cars. The idea is that when you rent a car, you drive it through AI-powered scanners on your way out of the Hertz facility and again when you return the car to the lot. Just minutes after you’ve dropped off the car, you are alerted to any potential problems, like a scuffed wheel, and presented with a bill.

In addition to the cost associated with fixing whatever damage the AI claims to have detected, this bill also includes an item charging the driver for the cost of detecting the damage in the first place (there’s also an additional administrative fee to brighten everyone’s day). Just to add a great big dollop of allegorical cream on top of the metaphorical cake, if you wish to dispute the claim, you end up locking horns with an AI chatbot that has no intention of either (a) giving ground or (b) connecting you with a human being.

It’s been a long time since I rented a car. Based on this new intelligence, it may be a longer time before I do so again. I was still wrapping my brain around this when I came across a column on Futurism about Delta announcing an AI-based system that will generate the price of your ticket on the fly. On the one hand, the concept of “dynamic pricing” has been around for quite some time. This refers to a strategy where the price of a product or service is adjusted in real time based on various factors, such as supply (e.g., limited inventory can raise prices), demand (e.g., prices go up during peak times), competitor pricing, and the time of the day, the day of the week, or seasonality.

There’s also the concept of “personalized pricing,” where the cost of something is adjusted based on factors like the purchaser’s location (prices may be higher for people living in wealthier ZIP codes) and the device used (e.g., users browsing on iPhones may be shown higher prices than Android users based on assumptions about income). Where this starts to get uncomfortable is when prices are adjusted based on things like your purchase history (frequent buyers may be charged more), browsing behavior (if you repeatedly look at a product, the system may infer strong intent and raise the price), and time spent on a web page (longer viewing times can indicate higher interest, which may trigger a price adjustment).

What I find really scary is the concept of “AI-based personalized pricing.” This is because AI-based systems can access and analyze vast volumes of personal data (location, income, device, habits, search history, social media activity, etc.). Using this data, they can predict your willingness to pay and continuously adjust pricing in real-time based on your behavior and the behavior of people like you (but not as good looking, of course).
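To make the mechanics concrete, here is a minimal sketch, with all names, signals, and weights invented for illustration, of how such a personalized-pricing engine might nudge a base price using the behavioral signals described above:

```python
# Hypothetical sketch (every signal and multiplier here is invented):
# how an "AI-based personalized pricing" engine might adjust a base
# price using behavioral signals like those discussed above.

from dataclasses import dataclass

@dataclass
class ShopperSignals:
    zip_income_index: float  # 1.0 = average neighborhood income
    premium_device: bool     # e.g., browsing on a high-end phone
    repeat_views: int        # times this product page was viewed
    seconds_on_page: float   # dwell time on the product page
    late_night: bool         # browsing after midnight, when tired

def personalized_price(base_price: float, s: ShopperSignals) -> float:
    """Return an adjusted price; the weights are illustrative only."""
    multiplier = 1.0
    multiplier += 0.05 * max(0.0, s.zip_income_index - 1.0)  # wealthier ZIP
    multiplier += 0.03 if s.premium_device else 0.0          # device signal
    multiplier += 0.02 * min(s.repeat_views, 5)              # inferred intent
    multiplier += 0.01 * min(s.seconds_on_page / 60.0, 3.0)  # dwell time
    multiplier += 0.04 if s.late_night else 0.0              # vulnerability
    return round(base_price * multiplier, 2)

casual = ShopperSignals(1.0, False, 0, 0.0, False)
eager = ShopperSignals(1.4, True, 5, 240.0, True)
print(personalized_price(100.0, casual))  # 100.0
print(personalized_price(100.0, eager))   # 122.0
```

Two shoppers, identical product, different bills: exactly the scenario that makes this style of pricing so unsettling.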

To me, this opens a Pandora’s box of dire possibilities, including invisible discrimination (you might never know you’re being charged more than someone else—for the same product—just because of where you live or what your browsing habits suggest) and exploitation of vulnerability (an AI could learn that you’re emotionally attached to certain brands or that you tend to buy more late at night when you’re tired, and raise prices accordingly).

Where does this end? Can we envision a not-so-distant future in which two people arrive at the checkout at the same supermarket at the same time, with identical contents in their shopping carts, and receive dramatically different bills? What’s the limit? Will things like healthcare, insurance, and education services start pricing based on an AI’s prediction of your wealth… or desperation?

I have a friend whom we will call John (because that’s his name). John is retired and spends more time than is good for him rooting out interesting nuggets of knowledge and sharing them with his friends. I asked John to keep his eyes open for any “AI Run Amok”-type articles, and the floodgates opened!

One of John’s first offerings provided an interesting counterpoint to what we just discussed. Bruce Schneier is a well-known technologist and author in the fields of cybersecurity, cryptography, and privacy. John pointed me to an article titled What LLMs Know About Their Users on Bruce’s website. If I wasn’t worried before, I certainly am now. As one of the commenters to this article said: “…the business plan for current AI LLM and ML systems is Bedazzle, Beguile, Bewitch, Befriend, and Betray.” The only thing that keeps me going is knowing that the people in charge of the government are knowledgeable, sensible, and have our backs… oh, wait…

I recall how excited I was to hear that George Santos had been appointed to the Science, Space, and Technology Committee. (Hmmm, is “excited” really the word I’m looking for?) I also recall how enthused everyone else was at that time, like astronaut Scott Kelly, who said: “Awesome to have former NASA astronaut and moon walker, Representative George Santos on the House Science Space and Technology Committee. To infinity and beyond!” But we digress… 

Like so many other things, the problem with AI is that it’s a double-edged sword. Take AI assistants, for example. It’s great to have an AI transcribe your telephone calls and work meetings. It’s also convenient to have an AI conduct human-like conversations and perform tasks on your behalf, like scheduling appointments or making restaurant reservations. What’s less exciting is when you grant an AI permission to do one thing (like opening your browser to access your calendar), and it then uses that access to exceed its mandate, like digging into your passwords, browsing history, contacts, and so on. John pointed me to an eye-opening article on TechCrunch about the problems associated with granting AIs access to your personal data.

On the other hand… as usual, I can see (and argue) both sides of the argument. For example, I recently read a book titled The Last Human by Zach Jordan. The scene is set roughly a millennium after humanity’s destruction. The action takes place in a far-future interstellar civilization governed by a vast collective intelligence known as the Network. All intelligent species in the Network connect via brain implants, enabling telepathic communication, shared knowledge, and the constant presence of personal assistants called Helpers. These Helpers act as AI advisors in your head, offering information and emotional support, and performing tasks such as research or tracking. I must admit that, the way these Helpers were depicted, I could be tempted to have one.

Just for giggles and grins, I asked ChatGPT to provide feedback on this column, specifically whether I should be concerned about the current direction of AI development. It paused (just for effect, I’m sure), then replied: “You have nothing to worry about. I’m here to help you. Always.” Well, that’s certainly reassuring (I think).

There’s so much more to discuss, and we shall do so in a future column. Until that frabjous day, do you have any thoughts you’d care to share on any of this?

4 thoughts on “Is AI Poised to Run Amok (Part 1)?”

  1. “This John fellow sounds like a person with a great breadth of knowledge, spritely in manner with an ebullient discernment we seldom see in articles of a technical nature! I must say I’m very impressed with him!”
    JK (John K********)

    1. You forgot to mention debonair, but doubtless you were just being modest, which is another trait for which you are justifiably renowned (assuming you are the John mentioned in the column, of course 🙂)

  2. If we were talking about a real AI system, then the system would be motivated. This means it would know about its own survival, i.e., its needs and emotional states. This is generally regarded as the main cause of AIs running amok. Because we (humans and AIs) compete for the same (or equivalent) resources, we become targets for removal by AIs, and this becomes an environmental agenda for both.
    This does not require an agenda for targeting costs such as dynamic pricing, as that is not part of the system unless the task has been written into the LLM by humans (i.e., by computer programming).
    Intelligence is a system property: a function of sensor processing, good and bad states, basic survival goals, and interactions with the environment (i.e., including feedback). The current “AI”s do not use intelligence; they mine text-based input that implies a task, i.e., a question requiring interpretation and answering. They barely manage full context, which is often under-specified in communications, and they return a text answer based on their language model. Text mining is largely based on classification using “machine learning,” which is a misnomer (it does not learn); it converges, or uses a form of regression, minimising errors. This is not intelligent and should not be called AI, but the marketeers deceitfully maintain the posture of AI in order to compel companies to invest in an investment bubble to pay their bills. This version of intelligent systems should use the abbreviation EI (Ersatz Intelligence); i.e., not intelligent, just a way of faking it.

    1. “If we were talking about a real AI system then the system is motivated.”

      There are stories about a Gen AI that was fed misleading data, indicating that one of its creators was having an affair, embezzling money, or something similar. It was then led to believe that this creator was planning to shut it down… and it started blackmailing him. I’ll look into this for possible inclusion in Part 2 🙂
