
Our Bots, Ourselves

Does Responsibility for Bots Lie with the Programmer or the User?

“You cannot escape the responsibility of tomorrow by evading it today.” – Abraham Lincoln

Doctor Frankenstein, I presume.

It seemed like such a simple thing. Typical, really. Totally not news. Yet another company had suffered a data breach, leaking its customers’ private information all over the interwebs. Names, credit cards, everything. You know the drill. You’ve probably been subject to such a leak yourself. Sadly, such events aren’t even remarkable anymore.

Except…

This one was a bit different because it involved robots. Okay, nothing so dramatic as clanking mechanical men, but “bots” in the sense of automated software agents that converse with actual humans. Like Eliza, Siri, or Alexa, but more advanced. Plenty of companies use bots on their websites to field routine questions from users. They’re chatbots, programmed to answer the easy stuff and to escalate to a real human technician if they get confused. Chances are, if you see one of those pop-up chat windows, you’ll actually be texting with a bot (at least initially). Don’t let the picture of the smiling agent with the headset fool you.

For the first time, the US Federal Trade Commission (FTC) has filed a complaint against a company for using bots on its website to trick users into divulging sensitive data. That is, the bots were deliberately designed to deceive, not just to collect harmless data. That’s a lot different from, say, a program that serves a useful purpose but unexpectedly goes awry because of some bug and causes harm. These bots were created for the purpose of cheating. That was their design goal.

According to university law professors Woodrow Hartzog and Danielle Citron, “It is the first such complaint by the FTC that involved bots designed to actively deceive consumers.” It’s one thing to create a Twitter chatbot that acquires hundreds of followers who might not know it isn’t a real person. It’s quite another to maliciously program a bot to commit a crime.

Bots, per se, are fine. Microsoft, Facebook, and other companies offer developer tools to make bot creation easier. But, like any tool, they can be misused. So where does the legal responsibility lie?

Asimov’s First Law of Robotics states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” But the Second Law says a robot must always do what it’s told, unless it conflicts with the First Law. You can’t order a robot to injure someone. Fair enough.

Nobody apparently told the programmers at Ashley Madison, the adult (very adult) dating site that attracted the FTC’s scrutiny. The company leaked all sorts of data about its 36 million users, which was bad enough, given the embarrassingly personal nature of its clients’ activities. But the data breach alone would have merited nothing more than a titillating news item and some awkward conversations, had it not been for the bots. Because Ashley Madison’s bots were actively cajoling users into joining up, paying subscription fees, and entering personal and financial details, they became central to the FTC’s case (which has since been settled).

The legal eagles at the FTC, as well as several states’ attorneys general, made the distinction between a good bot gone bad and one deliberately programmed to deceive, as these were. Plenty of bots have gone off the rails, producing sometimes humorous, sometimes horrifying responses to human conversations. That’s all part of machine learning – and developer learning.

Part of the problem is that bots are supposed to be responsive; they’re supposed to learn and be flexible, not just spew out canned responses. That very malleability, and the emergent behavior it produces, is what makes bots interesting. But it also puts them in a gray area somewhere between mercurial and dangerous.

As Microsoft and others discovered, bots can quickly go “off message” and embarrass their creators. It’s tempting to taunt or bait a bot just to get a reaction from its automated ego. But what if your bot is an official representative of your company? How do you keep it safe and sane and out of court?

One way is to chain it up and artificially limit its vocabulary. “If you ask a boat insurance bot about pizza, it should be designed to say, ‘Sorry, I don’t understand,’” recommends Caroline Sinders, a bot-interaction expert.
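In code, that muzzle doesn’t have to be fancy. Here’s a minimal sketch in Python of the idea: a whitelist of intents with canned replies, and a polite shrug for everything else. The intent names and keyword matching are invented for illustration; they’re not from any particular bot framework.

```python
# A deliberately narrow chatbot: anything outside its whitelisted topics
# gets a canned "I don't understand" instead of an improvised answer.
# Intent names and keyword matching are illustrative, not a real framework.

ALLOWED_INTENTS = {
    "get_quote": "I can help with that. What kind of boat do you want to insure?",
    "file_claim": "Sorry to hear it. Let's start your claim. What happened?",
    "coverage_info": "Our policies cover hull damage, theft, and liability.",
}

FALLBACK = "Sorry, I don't understand. I can only help with boat insurance."


def classify_intent(message):
    """Crude keyword matcher standing in for a real intent classifier."""
    text = message.lower()
    if "quote" in text or "price" in text:
        return "get_quote"
    if "claim" in text or "accident" in text:
        return "file_claim"
    if "cover" in text or "policy" in text:
        return "coverage_info"
    return None  # off-topic: pizza, politics, taunts


def respond(message):
    intent = classify_intent(message)
    return ALLOWED_INTENTS.get(intent, FALLBACK)


print(respond("Can I get a quote for my sailboat?"))    # scripted answer
print(respond("What's your favorite pizza topping?"))   # the polite shrug
```

Crude, sure, but the point stands: anything the bot wasn’t explicitly taught gets a scripted non-answer instead of an improvised one.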

Microsoft and other firms recommend that developers program a welcome message into their bots to notify users that they’re talking to a machine, not a person. That way, users’ expectations are tempered somewhat. And you’ve got a bit of legal cover in case your bot insults someone.
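The disclosure advice is just as easy to sketch. Here’s one way it might look, again in plain Python, with a hypothetical Session class standing in for whatever framework actually shuttles the messages back and forth:

```python
# The "tell them it's a bot" rule, sketched framework-free. The Session
# class and its send callback are hypothetical stand-ins, not a real API.

WELCOME = (
    "Hi! I'm an automated assistant, not a human agent. "
    "I can answer common questions; type 'agent' anytime to reach a person."
)


class Session:
    def __init__(self, send):
        self.send = send        # callback that delivers text to the user
        self.greeted = False

    def on_message(self, text):
        if not self.greeted:    # first contact: disclose the bot's nature
            self.send(WELCOME)
            self.greeted = True
        if text.strip().lower() == "agent":
            self.send("Okay, handing you off to a human teammate.")
        else:
            self.send("Let me look into that for you...")


# Usage: wire the session to whatever channel carries the chat.
session = Session(send=print)
session.on_message("Do you cover jet skis?")
```

The first message of every conversation announces the machine behind the curtain, and there’s always an exit ramp to a human.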

In a sense, that runs counter to the whole idea of a chatbot: a program that’s so humanlike as to be indistinguishable from the real thing. What fun is a chatbot that acts like a chatbot? Where’s the challenge – or the utility – in artificially limiting its abilities?

It’s pretty obvious that bots and related bits of artificial intelligence (AI) and machine learning are only going to get better – and quickly. That means chatbots are going to be even harder to distinguish from human chatterboxes, and soon. Some are already there. Where does that leave you, your employer, and your legal team?

Clearly, you can’t make a deliberately malicious bot. And we should probably keep branding our artificial assistants as such, a sort of early version of R. Daneel Olivaw (the R stands for Robot). Setting boundaries on your bots’ vocabulary seems like a temporary fix, at best, like embedding debug code. It’s fine for testing, but not for production. And after that? After that, we’ll have to take responsibility for our bots’ actions.

In some societies, parents are responsible for their children’s transgressions. The burden falls on the parents to make sure their offspring (including grown children) behave within the law and societal norms. Child rearing comes with some heavy legal accountability.

I think we’re just about there with bots, too. Our creations will be our responsibility. For now, they’re only acting as if they were autonomous. What happens when they really are? Today they can talk the talk; soon they’ll walk the walk. (And drive the car.) I don’t see any way we can shirk responsibility for our creations’ actions, even if we didn’t program those actions explicitly, and even if the outcomes are unexpected and unwanted. They’re our beasts; we need to either train them or leash them accordingly. And that’s going to be very, very hard.

10 thoughts on “Our Bots, Ourselves”

  1. @Jim & @Kevin — WTF … why are you attacking Programmers for being the responsible party again? … Why say “Does Responsibility for Bots Lie with the Programmer or the User?”

    Your question is simply: “Bots, per se, are fine. Microsoft, Facebook, and other companies offer developer tools to make bot creation easier. But, like any tool, they can be misused. So where does the legal responsibility lie?”

    So when a thief uses a screwdriver, a crowbar, and a hammer to break into a home or business … where does the responsibility lie? With the thief? Or with the company that sold the screwdriver, crowbar, and hammer? Or with the mechanical engineer who designed those tools, for not purposefully designing in safeguards to prevent them from being used in a robbery?

    This witch hunt against programmers has to stop.

    And when you start talking about physical robots, stop and remember that all the lines of code are mere fleeting electrons in storage, which cannot physically harm anyone. The real physical robot is designed by some group of electrical and mechanical engineers. If it’s a floor-washing bot, it’s probably pretty harmless … if it’s a military attack robot, it’s probably NOT so harmless. If the EE and ME used Microsoft’s robotics library to construct the robot’s control systems … who is at fault? The Microsoft programmers who built the robot toolkit library, or the EE and ME who used that library to create a deadly military robot? Or their employer, responding to a defense contract procurement? Or the military ground troops deploying the robot on a battlefield?

    https://msdn.microsoft.com/en-us/library/bb648760.aspx
