
Why Do We Think Robots Want to Kill Us? 03.08.2020

The Boston Dynamics robots tend to elicit awe and horror in equal measure. What’s there to fear, you ask? Popular culture says plenty.

Once owned by Google and today a subsidiary of SoftBank, Boston Dynamics is an American robotics company. The videos the company has posted online, most of which went viral, have allowed the public to closely follow the development of its two prototypes—one an android, the other a robot dog. The latter has recently been made commercially available—nicknamed “Spot,” the canine robot will run you a cool $74,500, but it can climb stairs, traverse difficult terrain, and is small enough to be used for a variety of tasks and missions. One of its first deployments saw it used for patrol duties in the early stages of the COVID-19 pandemic.

Killer Dog

Spot was designed by engineers with functionality in mind. I call it a “dog” purely because it’s about the size of a large canine and quadrupedal; that’s where the similarities end—without anything we’d consider a head, it looks more like a sleek industrial robot than anything else, a machine made to work. Even its color scheme—black and yellow—is industrial at heart, the palette of hammers and of the markings on sharp edges on factory floors. It stands in stark contrast to AIBO, the smart dog sold by Sony, which was designed to serve as a lovable electronic companion rather than navigate the hallways of chemically contaminated factories or carry loads over rough and difficult terrain. AIBO’s capacity for play and emotional expression is supposed to reinforce the illusion of attachment.

Improving the robot dog’s public image was entrusted to Aigency, which has been relying heavily on TikTok in its push to make Spot more approachable. But is poor old Spot’s lack of emotional capacity enough to explain why some fear it so instinctively, a reaction that AIBO never seems to elicit? I don’t think so. I don’t remember ever seeing anyone terrified by industrial robots, which can, after all, be very dangerous (to personnel disregarding health and safety guidelines) and have even been trained to fight each other with Japanese katana swords. Their unwieldy bodies, however, bolted to the floor, neutralize much of the dread they might inspire. Spot, by contrast, is terrifyingly capable: capable of walking, of climbing stairs and clambering over obstacles, of getting up when knocked down. Like the Terminator, it simply won’t stay down.

Black Mirror: Metalhead | Official Trailer | Netflix

The notoriously technophobic TV series Black Mirror features an episode called “Metalhead,” a story about headless robot dogs prowling the post-apocalyptic Earth in search of surviving humans to wipe out. A quick, knee-jerk answer to the public’s fears: “What’s going to happen when someone decides to fit these robot dogs with automatic weapons? Oh no, it’s a Terminator!” That was precisely the reaction that one of Boston Dynamics’ viral videos, featuring an earlier version of Spot, drew from none other than Rafał Brzoska, the CEO of InPost. Rather than just a robot dog, he saw a platform that could easily be retooled into a murder machine.

Limits of the Metaphor

The Terminator and other works of science fiction have embraced the robot-murderer trope because it’s a perfect metaphor for the soulless killer, devoid of any human emotion. Curiously, James Cameron did not intend his film to be a horror movie with an “electronic assassin” at its center—the film sought to interrogate and express the prevalent fears of the Cold War era, when much of the world lived in anticipation of a mass nuclear attack. At its core, the film argued that people could just as easily turn into soulless killing machines. After the button is pushed, sending the missiles on their way, the game is over—there’s no more room for respite or negotiation. WarGames, meanwhile—an essentially upbeat story which argues that the meaninglessness of war can be made sense of through a game of tic-tac-toe—explored the significance of the moment before the annihilation switch is flipped.

Terminator 2, however, told a different story—this time, the robot murderer sent from the future is clad in a police uniform, which was Cameron’s way of critiquing the brutality of American law enforcement. “Cops think of all non-cops as less than they are, stupid, weak, and evil. They dehumanize the people they are sworn to protect and desensitize themselves in order to do that job,” Cameron said in a 2010 interview. You don’t need a robot dog with automatic weapons to do violence to your fellow man. All you need is the feeling that you’re RoboCop.

The problem with overwrought metaphors is that they’re metaphors, and the audience does not always end up fearing what they are supposed to symbolize—instead, they start to fear the metaphor itself, the killer robot in this case. Androids prompt mistrust—they’re impostors trying to pass as human; we also fear the way they think, a fear explored in the Will Smith vehicle I, Robot, loosely based on the work of writer Isaac Asimov. The protagonist, played by Smith, hates robots and can’t bring himself to trust their kind, even though everyone around him seems to rely on them for help and service. Why? Because a robot once saved his life over that of a little girl—it calculated that her chance of surviving the drowning was smaller than his. Thus, the coldly calculating thinking machine becomes an object of hatred. The fear seems justified, but for some reason the protagonist’s hate does not extend to algorithms, only to androids. His trauma does not stop him from using autonomous cars. Is it because the car lacks human features?

The Three Laws of Robotics

Fear of the robot turning against its master and creator, in the vein of Frankenstein’s monster, is one of science fiction’s oldest tropes. Back in 1942, Isaac Asimov and editor John W. Campbell formulated the Three Laws of Robotics (which play a key role in the plot of the Will Smith film) and laid them out in Asimov’s story “Runaround”:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

If the laws were real, there would never have been a Terminator or a killer robot like Hector from the science fiction horror film Saturn 3; they would have prevented the eponymous Matrix and the rise of the Cylons from Battlestar Galactica. Killer robots are and will remain a part of our reality, however, because no law can save us from buggy code. Janusz Wojciechowski’s Nowoczesne zabawki [Modern Toys], a deeply formative childhood favorite of mine, dealt with just such situations. Alongside paeans to Soviet cybernetics and miraculous consumer electronics, the book featured a chapter exploring the history of robots—of which the most memorable part, at least to me, were the killer robots. The list included a heavy unit that went haywire and crushed its creator against a wall, a robot whose faultily attached arm fell on an engineer’s head and killed him, and an electronic Father Frost that went berserk after a power surge and demolished the exhibition it was displayed at. In none of these instances did a robot break the First Law.

Boston Dynamics "Spot" and Softbank Robotics "Pepper" Collaborative Robot dance

Bugs in code and design flaws are what we should be worried about. I have a couple of robot toys at home, including a Lego Mindstorms android and my beloved Roomba, which keeps my apartment clean so I don’t have to. This is the future we ended up with—we were promised Terminators, and what we got were gizmos keening “Error, error!” whenever they come across a shoelace. Naturally, even a seemingly helpless Roomba can do some real damage—the Internet is full of pictures of them smearing feces across entire rooms, while my friend’s unit suffered a failure of its edge-detection subroutine and fell from a mezzanine to the floor below. I shudder to think what would have happened had it fallen on someone’s head.
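It’s worth spelling out just how mundane such failures are. Below is a minimal sketch, purely hypothetical (the function names, the threshold, and the bug itself are invented for illustration, not taken from any real vacuum’s firmware), of how a single inverted comparison turns a cliff-detection safeguard into its opposite: exactly the kind of bug, rather than malice, that sends a robot off a mezzanine.

```python
# Hypothetical cliff-detection logic for a robot vacuum.
# Names, thresholds, and the bug are invented for illustration;
# this is not any vendor's actual firmware.

CLIFF_THRESHOLD_MM = 40  # a drop deeper than this should stop the robot


def cliff_ahead(drop_mm: float) -> bool:
    """Return True when the floor drops away in front of the robot."""
    # BUG: inverted comparison. Flat floor (small drop) reads as a cliff,
    # while a real ledge (large drop) reads as safe ground.
    return drop_mm < CLIFF_THRESHOLD_MM  # should be: drop_mm > CLIFF_THRESHOLD_MM


def drive_step(drop_mm: float) -> str:
    """Decide the next move from a single distance-sensor reading."""
    return "stop" if cliff_ahead(drop_mm) else "forward"


# Flat floor: the robot needlessly stops. Mezzanine edge: it drives on.
assert drive_step(5) == "stop"       # 5 mm of carpet pile
assert drive_step(300) == "forward"  # 300 mm drop: the robot falls
```

No First Law is broken here; the machine simply does what the code says, not what its designers meant.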

Invisible Killer

We fear androids and robot dogs because their appearance is all too familiar. The horror that Spot evokes in our hearts is just the cyberpunk version of the fear of the big bad wolf from Little Red Riding Hood. What we should fret about, meanwhile, are algorithms—the invisible hand behind a growing share of decisions made beyond human control. A while ago, Gazeta Wyborcza published Piotr Szostak’s undercover investigation, for which he worked illegally as a deliveryman for Uber Eats: “The app can be wrong. Once it sent me to pick up a pizza, but when I pulled up, the staff told me that I was already the fourth deliveryman asking about an order that was picked up by the first deliveryman.” The company is obviously sorry, but it shuns any responsibility, hiding behind the algorithm and its decisions.

But before an algorithm can make its first decision, it has to be designed, written, and deployed, with humans involved at every step of the process. Often enough, bad algorithms are the product of badly designed processes (e.g., when the Apple Watch engineers somehow forgot that roughly half of the human population menstruates) or of plain engineering incompetence. These are the things that ought to scare us, and that we ought to bring under strict control before anyone clicks “start” and unleashes the killer code into the world.
