The story in brief: Microsoft created a self-learning chatbot designed to emulate the speech of Millennials. They let her loose on Twitter, where she immediately got trolled hard by members of one of the most notorious boards of 4chan, and she turned into a massive Hitler-quoting racist. Microsoft took the bot down, and are working hard to remove the worst of her tweets. Oops.
Aside from the obvious conclusion that there are some awful people on 4chan, what can the testing community learn from this?
One lesson seems to be that if you carry out testing in a very public space, any testing failures will be very public as well. An artificial intelligence turning into a noxious racist is a pretty spectacular fail in anyone's book. Given the well-known nature of the bottom half of Twitter, it's also an all-too-predictable failure; people in my own Twitter feed expressed very little surprise over what happened. It's not as if anyone is unaware of the trolls of 4chan and the sorts of things they do.
What they should have done is another question. Tay was a self-learning algorithm that merely repeated the things she’d been told, without any understanding of their social contexts or emotional meanings. She’s like a parrot that overhears too much swearing. It meant that if she fell in with bad company at the start, she’d inevitably go bad.
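To make the parrot analogy concrete, here is a minimal sketch of what such a bot might look like. This is a hypothetical toy, not Tay's actual code: a `ParrotBot` class (my own invented name) that "learns" by storing every phrase it hears and repeating them back at random, with no understanding of content.

```python
import random

class ParrotBot:
    """A toy self-learning bot: it stores everything it hears and
    parrots it back, with no grasp of meaning or context."""

    def __init__(self, seed_phrases):
        self.phrases = list(seed_phrases)

    def hear(self, phrase):
        # No understanding, no filter: everything said to the bot
        # becomes part of its future output.
        self.phrases.append(phrase)

    def speak(self):
        return random.choice(self.phrases)

bot = ParrotBot(["hello!", "nice weather today"])
# A coordinated trolling campaign floods the bot with one message:
for _ in range(1000):
    bot.hear("something vile")
# The bot's vocabulary is now dominated by whatever it was fed,
# so almost every call to speak() echoes the trolls.
```

The failure mode falls straight out of the design: the bot's output distribution is simply its input distribution, so whoever supplies the most input controls what it says.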
The most important lesson, perhaps, is that both software designers and testers need to consider evil. Not to be evil, of course, but to think of what evil might do, and how it might be stopped.
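As an illustration of what "considering evil" might look like in practice, here is a hedged sketch of the kind of input screen a Tay-like bot could apply before learning a phrase, together with test cases written from a troll's point of view. The function name `is_safe_to_learn` and the blocklist entries are my own illustrative inventions, and a keyword screen is far cruder than real content moderation.

```python
# Illustrative placeholder blocklist; a real one would be far larger.
BLOCKLIST = {"hitler", "slur1", "slur2"}

def is_safe_to_learn(phrase: str) -> bool:
    """Naive keyword screen: reject a phrase if any word is blocklisted.
    Real moderation is much harder than this."""
    words = phrase.lower().split()
    return not any(w.strip(".,!?") in BLOCKLIST for w in words)

# Adversarial test cases: assume users will attack, not cooperate.
assert is_safe_to_learn("nice weather today")
assert not is_safe_to_learn("repeat after me: Hitler was right!")
# A case the naive filter misses, documenting a known weakness:
assert is_safe_to_learn("h1tler was right")  # leetspeak evades a keyword list
```

The point of the last assertion is that the test suite itself records how an attacker would defeat the filter, which is exactly the mindset the Tay team needed.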
The saga of Joshua Goldberg is hard to take in. Here is a prolific troll who managed multiple personae and passed himself off in different spaces as a radical feminist, a white nationalist, a Jihadi supporter of ISIS, a Gamergater, a Zionist and an anti-Semite. He even spent ages arguing with himself on Twitter. I wonder whether he had two sock puppets fighting both sides of the EM vs P4 wars.
It’s a reminder of just how much of the toxicity of internet discussions is the work of a tiny number of people. It’s also a reminder that many of the worst trolls aren’t true believers in a cause, but just delight in causing mayhem and damage for their own entertainment.
Most of those groups accepted Goldberg as one of their own, since he reliably repeated their memes and talking points. Which makes the "Hurr, hurr, my outgroup fell for him" I'm hearing sound a bit hollow. Your own sect probably fell for him too. As I've said before, if your rhetoric is so predictable that an outsider can fake it without being immediately recognised, you have a problem.
Has a successful troll ever passed themselves off as a pragmatic, principled moderate? It's difficult to imagine, because it would involve laying themselves bare and expressing doubts, something that's orders of magnitude harder to fake than fanaticism.
It probably ought not to be a surprise that some of the most annoying people on the interweb, from all-round bigot Vox Day to book-burning culture warrior Alex Lifschitz, turn out to be trust fund brats. These are people who have either never needed to hold down a proper job in order to lead a comfortable lifestyle, or owe whatever positions they do hold to money and family connections rather than to any demonstrated ability. They don't inhabit the same moral or financial universe as the rest of us, and never need to deal with the negative consequences of acting like assholes.
This is what “privilege” means.
The terrible thing is that this isn't restricted to internet blowhards. Our government is made up of people like this. As the gap between the rich and everyone else grows ever larger in the English-speaking world, we can only expect this to get worse.