By Kashmir Hill. Robots are starting to break the law, the law is trying to figure out what to do about it, and it all seems to be happening in Europe. Last month, Swiss authorities seized the Random Darknet Shopper art exhibit, which featured weekly purchases made by an automated bot given Bitcoin to spend on a Dark Web marketplace. (It mainly bought drugs.) This week, police in the Netherlands are dealing with a robot miscreant. Amsterdam-based developer Jeffry van der Goot reports on Twitter that he was questioned by police because a Twitter bot he owned made a death threat.
Van der Goot’s bot used his own tweets as fodder, taking random chunks of them and recombining them into new sentences that made sense. According to van der Goot, the bot tweeted something that sounded like a threat and mentioned an upcoming event in Amsterdam. Best of all, the bot was responding to another bot. Van der Goot is not identifying that other bot, and says he has deleted his own at the request of the police. If this is not a hoax, it may be the first time police have had to respond to a robot-on-robot threat of violence. (Read more about the death threat from the Twitter bot HERE)
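Van der Goot has not published the bot’s code, but the described behavior — chopping a corpus of old tweets into chunks and recombining them into new sentences — is how a simple Markov-chain text bot works. A minimal sketch (function names and the single-word “order” are illustrative assumptions, not the actual bot):

```python
import random

def build_chain(tweets, order=1):
    """Map each run of `order` words to the words seen following it
    across all source tweets."""
    chain = {}
    for tweet in tweets:
        words = tweet.split()
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            chain.setdefault(key, []).append(words[i + order])
    return chain

def generate(chain, max_words=12, seed=None):
    """Walk the chain from a random starting key, stitching together
    fragments of different source tweets into a new 'sentence'."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    while len(out) < max_words and key in chain:
        out.append(rng.choice(chain[key]))
        key = tuple(out[-len(key):])
    return " ".join(out)
```

Because the generator blindly splices fragments, phrases from two unrelated tweets can fuse into a sentence the author never wrote — which is exactly how an innocuous corpus can yield something that reads as a threat.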
Who Do We Blame When Robots Make Death Threats?
By Kashmir Hill. Last week, police showed up at the home of Amsterdam Web developer Jeffry van der Goot because a Twitter account under van der Goot’s control had tweeted, according to the Guardian, “I seriously want to kill people.” But the menacing tweet wasn’t written by van der Goot; it was written by a robot.
The police didn’t press charges. They just asked van der Goot, 28, to delete the account. The bot account now exists only as a cached page; the offending tweet has completely disappeared from the Internet’s surprisingly imperfect memory. It was a brief blip in the Twitter OMG machine, but the episode raises a fascinating and increasingly pressing question in these times of independent algorithms: Who is to blame when a robot does bad things? . . .
In this case, the bot itself got punished. It was killed off by its owner for its transgression at the urging of police. In the robot world, you can get the death penalty for a speech offense. Harsh! Who will stand up for robot civil liberties?
“Information by itself can commit a crime now,” said Ryan Calo, a law professor who studies robotics, by phone. If it is indeed a crime. A Twitter bot saying it wants to kill people isn’t really a threat, because that bot can’t show up with a gun in a dark alley. (At least not yet.) But somebody on the receiving end of that threat could take it seriously, not knowing that it’s a blustering bot. Here in the U.S., a still-undecided Supreme Court case, Elonis v. United States, deals with exactly this issue: whether a man’s Facebook post of violent Eminem lyrics — which his ex-wife interpreted as threatening — is a true threat that can get him into legal trouble even if he didn’t actually intend to hurt her. It would be much easier for American bots (and their owners) if the Supreme Court rules that empty threats are constitutionally protected.
I asked Calo if he thought any humans should take the fall for van der Goot’s bot, if it came to that. “I don’t know,” he said. “The law has to come up with a thing to do. It would probably look at the person who put the technology into play (Ed. note: bot owner van der Goot). If someone builds a general-purpose tool (Ed. note: bot builder Hertling), you can’t go after them. In criminal law, you don’t go after the person who breeds a dangerous dog, but the person who lets it loose.” (Read more from this story HERE)