The media has been awash of late with suggestions that Twitter is dying, because its user base has stopped growing and the share price has fallen. It’s true that it’s nowhere near the size of Facebook. But people were predicting the imminent death of Facebook years ago too, and it doesn’t seem to have gone away. Twitter’s problem is unrealistic expectations: it’s failed to displace Facebook as the world’s number one social network, but it’s still become something substantial in its own right.
Twitter has probably plateaued now, but it has enough of a user base to ensure that it’s going to be around for a long time yet. Though not as big as Facebook, it has a strong enough network effect that people will keep using it in preference to smaller competitors, who will struggle to break out of their niches.
Twitter’s biggest problem is that it’s still terrible at dealing with harassment, especially the pile-on attacks you get when someone with a substantial bully pulpit sets their followers on some poor nobody who’s got in their way.
Twitter does need to address this, but there are differing opinions as to exactly how it should go about it.
David Auerbach has called for a radical rethink on how Twitter handles conversations. Meanwhile Kasimir Urbanski suggests that the sky is falling, the authoritarians are taking over and it’s time to create a free speech alternative.
Twitter really has three options:

- Do nothing, on the grounds that any solution will cause more problems than it will solve.
- Publish much stricter terms of service, and throw a sufficiently large number of human moderators at the problem.
- Do what David Auerbach suggests and devolve moderation to the user level.
The first of those is almost certainly not an option. Despite the protestations of noisy libertarians, Twitter does have a real harassment problem, and it can’t all be dismissed as the whining of bullies who dish it out but can’t take it. It’s true that some activists have a very subjective and highly politicised definition of harassment. It’s true that not all victims are women and not all perpetrators are men. But there is enough evidence to suggest that women pay a far higher price in harassment for expressing remotely controversial opinions. If you still think that’s not a problem, I refer you to the word “privilege” (I dislike the term and it’s often misused, but there are times when it’s still appropriate. This is one of them). And no, third-party block lists are not the solution; they have too high a cost in false positives.
Twitter seems to be going for the second option, and this is the one place where I agree with Kasimir Urbanski: it’s not going to work. Human moderation can work very well for community sites, but only where there is a level of trust between the moderators and the community. Twitter is not a single community but many, many overlapping ones, most of which have few shared values in common. The failure modes of a mass human moderation approach are easy to imagine, and we’re already seeing worrying signs of them. We’ll see high-profile figures perma-banned “pour encourager les autres” because they’ve offended some other high-profile person or group with whom Twitter wants to curry favour. There will be no transparency, and who does and doesn’t get banned for near-identical behaviour will depend on who has the right friends or the right politics. Trust will evaporate.
Which leaves the third option, as proposed by David Auerbach. It’s not actually as radical a change as he suggests. It’s just a matter of applying some kind of reputation ranking to who can appear in your notifications, based on who the people you follow have either followed or blocked. They could have some kind of “traffic light” system: green people are those whom plenty of your friends follow and none have blocked; red people are those whom many of your friends have blocked, or who have accumulated many blocks relative to their tweet and follower counts; amber people are either those for whom not enough information is available, or those over whom your friends are divided between following and blocking.
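To make that concrete, here is a minimal sketch of how such a traffic-light classification might work. Everything in it is my own illustration, not anything Auerbach or Twitter has specified: the data shapes (`follows` and `blocks` as per-friend sets) and the thresholds are assumptions chosen purely to show the idea.

```python
# Hypothetical sketch of the "traffic light" idea: classify a prospective
# notification sender based on what the accounts you follow have done.
# `follows` and `blocks` map each friend's id to the set of accounts that
# friend follows or has blocked. Thresholds are illustrative guesses.

def classify_sender(sender, my_friends, follows, blocks,
                    green_follows=3, red_blocks=2):
    """Return 'green', 'red' or 'amber' for a prospective notification sender."""
    follow_votes = sum(1 for f in my_friends if sender in follows.get(f, set()))
    block_votes = sum(1 for f in my_friends if sender in blocks.get(f, set()))

    # Green: plenty of your friends follow the sender and none have blocked them.
    if follow_votes >= green_follows and block_votes == 0:
        return "green"

    # Red: many of your friends have blocked the sender. (A real system might
    # also look at the sender's global blocks relative to their tweet and
    # follower counts, as mentioned above.)
    if block_votes >= red_blocks:
        return "red"

    # Amber: not enough information, or your friends are divided.
    return "amber"


# Tiny illustrative usage:
friends = {"alice", "bob", "carol"}
follows = {"alice": {"dave"}, "bob": {"dave"}, "carol": {"dave"}}
blocks = {"carol": {"eve"}}
print(classify_sender("dave", friends, follows, blocks))                # green
print(classify_sender("eve", friends, follows, blocks, red_blocks=1))   # red
```

The point of doing it this way is that the signal comes from people you already trust, rather than from a central moderation team or a third-party block list.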
It’s not necessarily perfect, and there is a danger of echo chambers, which have their own problems. Whatever algorithms they use need to be designed to short-circuit anyone who tries to game the system by mass-blocking people they don’t like for reasons other than harassment, and that’s probably easier said than done.
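One hedged idea for blunting that kind of gaming would be to weight each friend’s block by how selective that friend is, so an account that blocks thousands of people contributes very little to anyone’s “red” score. The helper below and its logarithmic damping are purely my own illustrative assumptions, not a proposal from Auerbach or Twitter.

```python
import math

def block_weight(total_blocks_issued):
    """Weight of a single block from a friend who has issued
    `total_blocks_issued` blocks in total; indiscriminate blockers
    contribute less to a sender's 'red' score."""
    return 1.0 / (1.0 + math.log1p(total_blocks_issued))

# e.g. a friend with 5 blocks contributes roughly 0.36 per block,
# while a friend with 5,000 blocks contributes only about 0.10 per block.
```

In the earlier sketch, `block_votes` could then sum these weights instead of counting blocks, so a handful of deliberate blocks from selective friends would still outweigh a campaign of automated mass-blocking.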