feature article

Our Bots, Ourselves

Does Responsibility for Bots Lie with the Programmer or the User?

“You cannot escape the responsibility of tomorrow by evading it today.” – Abraham Lincoln

Doctor Frankenstein, I presume.

It seemed like such a simple thing. Typical, really. Totally not news. Yet another company had suffered a data breach, leaking its customers’ private information all over the interwebs. Names, credit cards, everything. You know the drill. You’ve probably been subject to such a leak yourself. Sadly, such events aren’t even remarkable anymore.

Except…

This one was a bit different because it involved robots. Okay, nothing so dramatic as clanking mechanical men, but “bots” in the sense of automated software agents that converse with actual humans. Like Eliza, Siri, or Alexa, but more advanced. Plenty of companies use bots on their websites to handle simple questions from users. These chatbots are programmed to answer routine queries or to escalate to a real human technician if they get confused. Chances are, if you see one of those pop-up chat windows, you’ll actually be texting with a bot (at least initially). Don’t let the picture of the smiling agent with the headset fool you.

For the first time, the US Federal Trade Commission (FTC) has filed a complaint against a company for using bots on its website to trick users into divulging sensitive data. That is, the bots were deliberately designed to deceive, not just to collect harmless data. That’s a lot different from, say, a program that serves a useful purpose but unexpectedly goes awry because of some bug and causes harm. These bots were created for the purpose of cheating. That was their design goal.

According to university law professors Woodrow Hartzog and Danielle Citron, “It is the first such complaint by the FTC that involved bots designed to actively deceive consumers.” It’s one thing to create a Twitter chatbot that acquires hundreds of followers who might not know it isn’t a real person. It’s quite another to maliciously program a bot to commit a crime.

Bots, per se, are fine. Microsoft, Facebook, and other companies offer developer tools to make bot creation easier. But, like any tool, they can be misused. So where does the legal responsibility lie?

Asimov’s First Law of Robotics states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” But the Second Law says a robot must always do what it’s told, unless it conflicts with the First Law. You can’t order a robot to injure someone. Fair enough.

Apparently nobody told the programmers at Ashley Madison, the adult (very adult) dating site that attracted the FTC’s scrutiny. The company leaked all sorts of data about its 36 million users, which was bad enough, given the embarrassingly personal nature of its clients’ activities. But the data breach alone would have merited nothing more than a titillating news item and some awkward conversations, had it not been for the bots. Because Ashley Madison’s bots were actively cajoling users into joining up, paying subscription fees, and entering personal and financial details, they became central to the FTC’s complaint (which has since been settled).

The legal eagles at the FTC, as well as several states’ attorneys general, made the distinction between a good bot gone bad and one deliberately programmed to deceive, as these were. Plenty of bots have gone off the rails, producing sometimes humorous, sometimes horrifying responses to human conversations. That’s all part of machine learning – and developer learning.

Part of the problem is that bots are supposed to be responsive; they’re supposed to learn and be flexible, not just spew out canned responses. That very malleability, and the emergent behavior it produces, is what makes bots interesting. But it also puts them in a gray area somewhere between mercurial and dangerous.

As Microsoft and others discovered, bots can quickly go “off message” and embarrass their creators. It’s tempting to taunt or bait a bot just to get a reaction from its automated ego. But what if your bot is an official representative of your company? How do you keep it safe and sane and out of court?

One way is to chain it up and artificially limit its vocabulary. “If you ask a boat insurance bot about pizza, it should be designed to say, ‘Sorry, I don’t understand,’” recommends Caroline Sinders, a bot-interaction expert.
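To make that concrete, here’s a minimal sketch (in Python, with a purely hypothetical topic list) of the limited-vocabulary approach Sinders describes: the bot answers only questions that match the handful of topics it was built for, and everything else gets the canned apology.

```python
# Minimal sketch of a "limited vocabulary" chatbot: it answers only
# questions that touch a known topic and falls back to a polite refusal
# for anything else. Topics and responses are hypothetical examples.

IN_SCOPE_RESPONSES = {
    "coverage": "Our boat policies cover hull damage, theft, and towing.",
    "claim": "You can file a claim online or by phone, any time.",
    "quote": "I can start a quote for you. What type of boat do you own?",
}

FALLBACK = "Sorry, I don't understand. I can only answer boat insurance questions."


def reply(user_message: str) -> str:
    """Return a canned answer for a known topic, or the fallback response."""
    text = user_message.lower()
    for topic, answer in IN_SCOPE_RESPONSES.items():
        if topic in text:
            return answer
    return FALLBACK


if __name__ == "__main__":
    print(reply("How do I file a claim?"))  # known topic: canned answer
    print(reply("What's on your pizza?"))   # off topic: "Sorry, I don't understand..."
```

A production bot would use an intent classifier rather than keyword matching, but the principle is the same: anything outside the approved list of intents routes to the fallback, or to a human.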

Microsoft and other firms recommend that developers program a welcome message into their bots to notify users that they’re talking to a machine, not a person. That way, users’ expectations are tempered somewhat. And, you’ve got a legal loophole in case your bot insults someone.
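Here’s an equally minimal sketch of that disclosure advice, again in plain Python with a made-up session class rather than any particular bot framework: the first reply in every conversation tells the user, up front, that they’re talking to software.

```python
# Minimal sketch of a bot that discloses its nature at the start of each
# conversation. The session handling is hypothetical; a real bot framework
# supplies its own conversation lifecycle hooks for this.

WELCOME = (
    "Hi! I'm an automated assistant, not a human agent. "
    "I can answer basic questions, or hand you off to a person at any time."
)


class ChatSession:
    def __init__(self) -> None:
        self.greeted = False  # has this conversation seen the disclosure yet?

    def on_message(self, user_message: str) -> list[str]:
        """Prepend the disclosure to the first reply of a new conversation."""
        replies: list[str] = []
        if not self.greeted:
            replies.append(WELCOME)  # set expectations before any answer
            self.greeted = True
        replies.append(self.answer(user_message))
        return replies

    def answer(self, user_message: str) -> str:
        # Placeholder for the bot's real response logic.
        return "Thanks! Let me look into that for you."
```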

In a sense, that runs counter to the whole idea of a chatbot: a program that’s so humanlike as to be indistinguishable from the real thing. What fun is a chatbot that acts like a chatbot? Where’s the challenge – or the utility – in artificially limiting its abilities?

It’s pretty obvious that bots and related bits of artificial intelligence (AI) and machine learning are only going to get better – and quickly. That means chatbots are going to be even harder to distinguish from human chatterboxes, and soon. Some are already there. Where does that leave you, your employer, and your legal team?

Clearly, you can’t make a deliberately malicious bot. And we should probably keep branding our artificial assistants as such, a sort of early version of R. Daneel Olivaw (the R stands for Robot). Setting boundaries on your bots’ vocabulary seems like a temporary fix, at best, like embedding debug code. It’s fine for testing, but not for production. And after that? After that, we’ll have to take responsibility for our bots’ actions.

In some societies, parents are responsible for their children’s transgressions. The burden falls on the parents to make sure their offspring (including grown children) behave within the law and societal norms. Child rearing comes with some heavy legal accountability.

I think we’re just about there with bots, too. Our creations will be our responsibility. For now, they’re only acting as if they were autonomous. What happens when they really are? Today they can talk the talk; soon they’ll walk the walk. (And drive the car.) I don’t see any way we can shirk responsibility for our creations’ actions, even if we didn’t program those actions explicitly, and even if the outcomes are unexpected and unwanted. They’re our beasts; we need to either train them or leash them accordingly. And that’s going to be very, very hard.

10 thoughts on “Our Bots, Ourselves”

  1. @Jim & @Kevin — WTF … why are you attacking Programmers for being the responsible party again? … Why say “Does Responsibility for Bots Lie with the Programmer or the User?”

    Your question is simply: “Bots, per se, are fine. Microsoft, Facebook, and other companies offer developer tools to make bot creation easier. But, like any tool, they can be misused. So where does the legal responsibility lie?”

    So when a thief uses a screwdriver, a crowbar, and a hammer to break into a home or business … where does the responsibility lie? With the thief? Or with the company that sold the screwdriver, crowbar, and hammer? Or with the mechanical engineer that designed them, for not purposefully designing in safeguards to prevent them from being used in a robbery?

    This witch hunt against programmers has to stop.

    And when you start talking about physical robots, stop and remember that all the lines of code are mere fleeting electrons in storage that cannot physically harm anyone. The real physical robot is designed by some group of electrical and mechanical engineers. If it’s a floor-washing bot, it’s probably pretty harmless … if it’s a military attack robot, it’s probably NOT so harmless. If the EE and ME used Microsoft’s robot library to construct the robot’s control systems … who is at fault? The Microsoft programmers that built the robot toolkit library, or the EE and ME that used that library to create a deadly military robot? Or their employer responding to a defense contract procurement? Or the military ground troops deploying the robot on a battlefield?

    https://msdn.microsoft.com/en-us/library/bb648760.aspx


