Social Media Blog

Thoughts and rants on social networking.

If I find myself leaving snarky comments against risibly pompous and patronising articles served up by LinkedIn (Really, there was one that suggested your career depends on driving the right car!), then perhaps I really need to get back on Twitter…

Posted on by Tim Hall | Leave a comment

Moderating Twitter

Twitter has a troll problem.

If you’re white, male, not a celebrity and don’t tend to say anything much that’s controversial, then blocking the occasional drive-by troll works perfectly well. If at least one of those things doesn’t apply to you, there’s plenty of evidence that Twitter is a little bit broken and that better blocking and moderation functionality is needed.

Twitter does have a function to report abuse, but I’m seeing complaints that it’s far too cumbersome, and that this has the (possibly deliberate) effect of limiting its use. At least one person has noted that it takes more effort to report an account for abuse than it does for a troll to create yet another throwaway sock-puppet account, a recipe for a perpetual game of whack-a-mole.

In contrast, here’s the Report Abuse form from The Guardian’s online community. There is no real reason why reporting abuse on Twitter needs to be any more complicated than this.

Grainiad Abuse Report
And here’s the dropdown listing the reasons. Not all of those would be appropriate for Twitter; “Spam” and “Personal Abuse” certainly are, the others less so.

Grainiad Abuse Report 2
While I approve of Twitter taking a far tougher line against one-to-one harassment, I am not at all convinced that more generalised speech codes are appropriate for a site on the scale of Twitter. Such things are perfectly acceptable and even expected for smaller community sites where it’s part of the deal when you sign up and reflects the ethos behind the site. Indeed, most such community sites are only as good as their moderation, and there are as many where it’s done badly as those where it’s done well. We can all name sites where either lack of moderation or overly partisan moderation creates a toxic environment.

But for a global site with millions of users the idea of speech codes opens a lot of cans of worms which ultimately boil down to power. Who decides what is and isn’t acceptable speech? Whose community values should they reflect? Who gets to shut down speech they don’t like and who doesn’t? I can’t imagine radical feminists taking kindly to conservative Christians telling them what they can or cannot say on Twitter. Or vice versa.

Better to make it easier for groups of people whose values clash so badly that they cannot coexist in the same space to be able to avoid one another more effectively. Yes, there is a danger of creating echo-chambers; as I’ve said before, if you spend too much time in an echo-chamber, then your bullshit detectors cease to function effectively. But Twitter’s current failure mode is in the other direction; pitchfork-wielding mobs who pile on to anyone who dares to say something they don’t like, overwhelming their conversations.

At the moment, the only moderation tool available to individual users is the block function, which is a bit of a blunt instrument, and is only available retrospectively, once the troll has already invaded your space.

There are other things Twitter could implement if they wanted to:

For a start, now that Twitter has threaded conversations, how about adding the ability to moderate responses to your own posts? Facebook and Google+ both allow you to delete other people’s comments below your own status updates. The equivalent in Twitter would be to allow you to delete other people’s tweets that were @replies to your own. If that’s too much against the spirit of Twitter, which it may well be, at least give users the power to sever the link so the offending tweet doesn’t appear as part of the threaded conversation.

Then perhaps there ought to be some limits on who can @reply to you in the first place. I’ve seen one suggestion for a setting that prevents accounts whose age is below a user-specified number of days from appearing in your replies tab, which would filter out newly-created sock-puppet accounts. A filter on follower count would have a similar effect; sock-puppets won’t have many friends.

Another idea would be to filter on the number of people you follow who have blocked the account. This won’t be as much use against sock-puppets, but will be effective against persistent trolls who have proved sufficiently annoying or abusive to other people in your network.
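As a rough illustration of how those suggestions might fit together, here’s a minimal sketch of such a reply filter. None of this is the real Twitter API; the Account fields and the thresholds are made up for the sake of the example, and a real implementation would be tuned against actual abuse data.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int            # how long ago the account was created
    followers: int           # raw follower count
    blocked_by_friends: int  # how many people *you* follow have blocked it

def show_in_replies(account: Account,
                    min_age_days: int = 7,
                    min_followers: int = 10,
                    max_friend_blocks: int = 3) -> bool:
    """Decide whether an @reply from this account appears in your replies tab."""
    if account.age_days < min_age_days:        # newly-created sock-puppet
        return False
    if account.followers < min_followers:      # sock-puppets won't have many friends
        return False
    if account.blocked_by_friends > max_friend_blocks:  # annoys people in your network
        return False
    return True

# Example: a day-old account with two followers gets filtered out.
print(show_in_replies(Account(age_days=1, followers=2, blocked_by_friends=0)))  # False
```

The point isn’t the exact numbers; it’s that all three signals are cheap to compute from data Twitter already holds, and each user could set their own thresholds.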

All of these are things which Twitter could implement quite easily if the will was there. But instead they seem more interested in spending their development effort on Facebook-style algorithmic feeds.

Posted in Social Media | Tagged , | Leave a comment

Violet Blue on Facebook

Tech commentator Violet Blue writes about Facebook’s “emotional contagion” experiments, and does not mince words, calling them “unethical, untrustworthy, and now downright harmful”:

Everyone except the people who worked on “Experimental evidence” agree that what Facebook did was unethical. In fact, it’s gone from toxic pit of ethical bankruptcy to unmitigated disaster in just a matter of days….

…. Intentionally doing things to make people unhappy in their intimate networks isn’t something to screw around with — especially with outdated and unsuitable tools.

It’s dangerous, and Facebook has no way of knowing it didn’t inflict real harm on its users.

We knew we couldn’t trust Facebook, but this is something else entirely.

Time will tell, but I wonder whether this will turn out to be a tipping point when significant numbers of people conclude that Zuckerberg and co cannot be trusted and seek other ways of keeping in touch online with those they really care about.

It may just be a bizarre coincidence, but I’ve noticed a lot of people I used to know on Facebook showing up as “People I may know” on Google+. Not that Google is much less creepy and intrusive than Facebook.

Posted in Social Media | Tagged , | Comments Off

Facebook: As Creepy As Hell

The media have now picked up on the story of Facebook tinkering with users’ feeds for a massive psychology experiment.

Even if this is technically legal under the small print of Facebook’s Terms of Service, there is no way in hell what they did can be remotely ethical, although it’s difficult to describe it as a “betrayal of trust”, since nobody in their right mind should be trusting this creepy organisation as far as they can throw them.

I really hope this revelation encourages more people to log off from Facebook and find other, better ways of keeping in touch with the people they care about.

Posted in Social Media | Tagged | Comments Off

Operation Lollipop demonstrates Poe’s Law in action

Last week large numbers of supposed feminists on Twitter were exposed as trolls associated with the notorious troll citadel 4chan, and at least two of the most dubious Twitter hashtags, including #EndFathersDay, turned out to be their work, part of an orchestrated mass trolling campaign called “Operation Lollipop”.

Everyone writing about the subject naturally concludes that it confirms their existing point of view. Lola Okolosie and Laurie Penny, writing in The Guardian, saw the whole thing as an extinction burst.

The reason sexist trolls fretting alone in their bedrooms are frightened of political women online, particularly women of colour, is the same reason they won’t win. Despite our differences, and even because of our differences, we are powerful, and we are many, and this is our time, not theirs.

Meanwhile the former Communists turned right-libertarians of Spiked Online consider the whole thing to be a useful parody.

There is something pretty pompous about the rigid etiquette of the Twitter activists’ call-out culture that begs to be mocked.

If hashtag activism is easily parodied, then that shows what is wrong with it. By drawing out the excessiveness of hashtags like #SolidarityisforWhiteWomen or #KillAllMen, the 4channers were doing everyone a favour. The wisest point about Twitter was made by playwright Steven Berkoff: if you jump in a dustbin you cannot complain that you are covered in rubbish.

The whole thing does seem like a very good practical demonstration of Poe’s Law, which states:

Without a blatant display of humour, it is impossible to create a parody of extremism or fundamentalism that someone won’t mistake for the real thing.

And that proved true here. By no means everyone who picked up and ran with the fake hashtags was one of 4chan’s adolescent racists and sexists, and quite a few right-wing anti-feminist types, including the arch-misogynist Paul Elam, fell for it too.

What it has done is expose the weaknesses of “hashtag activism”. Twitter’s 140-character limit means activist soundbites are stripped of all context and nuance, and Twitter always tends to magnify the loudest voices at the expense of the wisest. Even well-intentioned hashtags frequently become toxic as more people jump on, and nobody can control or moderate them.

Whether it will lead to any self-reflection remains to be seen. I’m not holding my breath.

Posted in Religion & Politics, Social Media | Tagged , , | Comments Off

Twitter’s 140 character limit is one of its great strengths, most of the time. But the downside is that Twitter is awful at nuance or at things that require context. Which means it all-too-easily turns very toxic in emotionally-charged situations.

Posted on by Tim Hall | Comments Off

Giles Fraser on Forgiveness and the Internet Generation

Insightful piece by Giles Fraser in The Guardian suggesting that the internet generation will be a lot better at forgiveness than older people.

Which is why (I predict) the internet generation is going to end up being a lot better at what we used to be comfortable calling forgiveness. For if we are going to find it more and more difficult to forget, then we are surely going to find it more and more important to forgive. Public figures will no longer be able to delete their messy adolescences, for instance. Which means that we are going to have to learn to deal with our public figures as being more than bland two-dimensional cutouts. We are going to have to accept that they are as human and fallible as the rest of us. This is clearly good: we are simply going to have to learn to be more honest about ourselves and about other people.

I hope he’s right.

On the other hand, there’s plenty of evidence that dark pasts don’t seem to damage political careers; for example the Bullingdon Club behaviour of several Tory members of the cabinet, or former Home Secretary John Reid’s membership of the pro-Stalin Communist Party of Great Britain.

Posted in Religion & Politics, Social Media | Tagged | 2 Comments

Things Twitter could do

I think few people would deny that Twitter has a troll problem. For us regular users with a few hundred followers it’s easy enough to block the occasional drive-by troll, especially if we’re male. But it’s a different story for public figures, especially women, who can find themselves bombarded with hundreds of abusive messages.

Technical solutions for social problems aren’t ideal, but trying to re-educate the sections of the population who live in the bottom half of the internet is at best a very long-term project. In the meantime there are things Twitter could do to make it harder for trolls to ruin people’s Twitter experience.

One would be to give users the ability to filter the Notifications tab. At the moment, anyone you haven’t blocked will be visible in that tab if they @message your username. It’s not technically difficult to filter that by degrees of separation, so that what you see in your Notifications tab can take into account things like:

  • The number of people you’re following who follow them
  • The number of people you follow who have blocked them
  • The total number of people who have blocked them relative to their number of followers.

Of course it would need to be refined to prevent the trolls themselves from gaming the system. For example, perhaps blocks from those who are very block-happy but have themselves collected a lot of blocks could be disregarded.
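Purely as a sketch of that refinement, with made-up field names and weights rather than anything Twitter actually exposes, the idea is that each block counts for less the more block-happy the blocker is and the more blocks they’ve collected themselves:

```python
from dataclasses import dataclass

@dataclass
class Blocker:
    blocks_given: int     # how many accounts this person has blocked
    blocks_received: int  # how many people have blocked them in turn

def block_weight(b: Blocker) -> float:
    """Discount blocks from the very block-happy and the widely blocked."""
    happy_penalty = 1.0 / (1.0 + b.blocks_given / 100.0)
    blocked_penalty = 1.0 / (1.0 + b.blocks_received / 10.0)
    return happy_penalty * blocked_penalty

def block_score(blockers: list[Blocker]) -> float:
    """Sum of weighted blocks; filter an account once this crosses some threshold."""
    return sum(block_weight(b) for b in blockers)
```

A troll ring mass-blocking a target would then carry much less weight than a handful of blocks from ordinary, rarely-blocked accounts.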

Twitter could also crack down on abuse of multiple accounts. There are plenty of legitimate reasons why people need multiple accounts, but it’s well known that trolls often churn through multiple throwaway accounts as each one gets blocked by their targets. Surely it’s not impossible for some kind of pattern-matching on IP addresses and word use to identify which accounts are being used by the same person, and deal with them accordingly when any one is suspended for abuse.
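Purely to illustrate the kind of pattern-matching I mean, and with the caveat that a real system would need far more signals than this to avoid false positives, a crude first pass might flag two accounts as the same person when they post from overlapping IP addresses and use very similar vocabulary:

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two sets, from 0 (disjoint) to 1 (identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def likely_same_user(ips_a: set, words_a: set,
                     ips_b: set, words_b: set) -> bool:
    shares_ip = bool(ips_a & ips_b)                   # posted from a common address
    similar_vocab = jaccard(words_a, words_b) > 0.6   # arbitrary threshold
    return shares_ip and similar_vocab
```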

Twitter is very efficient at nuking spam accounts, and they’re pretty easy to identify algorithmically. Dealing with trolls is harder, and will require more human intervention, but that’s no excuse for Twitter to do nothing. As I’ve pointed out, there are plenty of things they could do if the will was there.

Posted in Social Media | Comments Off

Bloom.fm bites the dust?

Bloom Gameover

Sad news on Bloom.fm’s blog:

We’ll keep this short because we’re pretty shell-shocked.

It’s game over for Bloom.fm.

Our investor, who’s been along for the ride since day one, has unexpectedly pulled our funding.

It’s come so out of the blue that we don’t have time to find new investment. So, with enormous regret, we have to shut up shop.

This is a poetically crappy turn of events as our young business was showing real promise. Our apps and web player are looking super-nice and we had 1,158,914 registered users in a little over a year. Yep.

A massive thanks to everyone that helped us get this far. We’re absolutely gutted. But it’s been a real pleasure.

A later blog post states that the application will remain running for a few days while they make last-ditch attempts to find a buyer.

Coming so soon after the demise of last.fm’s streaming radio, it does make you question the viability of legal online streaming services. Are the labels and collection agencies being too greedy when it comes to licensing? Or do they want startups like Bloom to fail so as not to cannibalise download sales?

Update: In an interview today, Bloom’s Oleg Formenko suggests that all may not be lost, and that there are a number of potential buyers in the frame.

Posted in Music News, Social Media | Tagged , , | 2 Comments

Context Collapse

Interesting post on the Software Testing Club on the subject of Context Collapse.

I recently heard the term “context collapse” in a podcast discussing the possible flight of the younger audience from some social media applications. It is unclear who originally coined the term in the early 2000′s, which initially referred generically to the overlapping circles on social media leading to a poster’s inability to focus on a single audience. In the podcast, the meaning was more specifically defined to identify the clash of incompatible social circles: college acquaintances, close friends, family, and work connections (especially management). That incompatibility leads to an abandonment of the media or couching postings in coded terms that are (supposedly) only understood within a specific circle.

Yes, that’s exactly why I decided to leave Facebook. I didn’t realise there was actually a term for it. The post on STC goes on to describe another case of Context Collapse involving accessibility testing, which the team eventually dealt with by getting actual disabled people to test the product. It’s a very interesting read.

Posted in Social Media, Testing & Software | Tagged | Comments Off