Twitter CEO Promises to Fight Trolls — But Let’s Not Start Celebrating Yet

In the annals of stating the obvious, the internal Twitter forum post leaked to The Verge this week is destined to go down as a classic of the genre. “We,” proclaimed Twitter CEO Dick Costolo, “suck at dealing with abuse and trolls on the platform.” Why yes, Dick. You do. You really, really do. The rest of the memo has been reported as a sort of internal mea culpa: “I’m frankly ashamed of how poorly we’ve dealt with this issue during my tenure as CEO,” Costolo wrote, adding that “I take full responsibility for not being more aggressive on this front. It’s nobody else’s fault but mine, and it’s embarrassing.” Perhaps the most instructive line, however, is the following observation: “We lose core user after core user by not addressing simple trolling issues that they face every day.”

I don’t want to be cynical about this. As much as one can tell from the tone of a leaked forum post, Costolo sounds genuinely contrite and frustrated about the fact that Twitter has become the #1 vehicle for Internet harassment: he says that he “take[s] full responsibility for not being more aggressive [against trolls]” and claims that “everybody on the leadership team knows this is vital.” The fact that the company’s CEO is at least acknowledging the problem and talking the talk about fixing it is encouraging.

But we’ve been here before. After Robin Williams’ daughter Zelda quit Twitter last year because Internet bottom feeders thought it’d be hilarious to send her photoshopped images of her father, Twitter promised to streamline its abuse-reporting process and make block lists easier to manage. Did it work? If you’ve ever tried to report abuse on Twitter, you already know the answer: nope. Barely two weeks later, Lindy West — who has been in the news of late for confronting a harasser who posed online as her dead father — published a catalog of abusive tweets threatening doxxing, rape, violence, and murder, none of which Twitter had deemed to be in violation of its terms of service.

The terms of service are a big part of the problem here. The issue of “threats and abuse” is dealt with in the company’s abusive behavior policy, which specifies that “Users may not make direct, specific threats of violence against others… Targeted abuse or harassment is also a violation [of these rules].” The important part here is the “direct, specific” wording, which Twitter has interpreted as narrowly as possible. Last year, Wired quoted one of the company’s PR representatives, Jim Prosser, elaborating on this: “It’s not just that something should happen to you; it’s that something is going to happen to you. Where it will happen, from what, with what. Rather than just ‘I hate you, go die in a fire.’ You have something more specific there.”

The Wired article also pointed out that this policy was patently absurd, citing a case where Anita Sarkeesian had reported a tweet that said, “I will rape you when I get the chance,” but was told the threat didn’t violate Twitter’s rules because it “didn’t meet the criteria of an actionable threat.” (Do you have to tweet, “I will rape you in [place] at [time]” before Twitter considers the threat “actionable”?) The company’s terms of service remain the same now as they were then, and until they change, it’s hard to see how any measure can be taken against Twitter-based online harassment, given that the company’s own rules deem that abuse to be acceptable.

So there are two questions here: Is Twitter really serious about walking the walk? And even if it is, how exactly is it going to do so? Quite how to fix Twitter, and whether it is fixable at all, has been a subject of much debate — last October, Slate’s David Auerbach argued that “Twitter is, for better and for much worse, a unified public space… [and] worse, its design stresses conflict and impedes consensus.” I don’t agree with all of Auerbach’s contentions, but I think he’s correct on that point: the very nature of Twitter makes it a perfect forum for abuse and harassment, and it’s difficult to see how that will change without a fundamental change in what Twitter is.

The characteristics that make Twitter so popular are also those that make it so ripe for abuse: anonymity, ease of joining, brevity, ubiquity, and direct access to people you otherwise have no way of contacting, especially celebrities. Where else can you sign up, create an account, and send an anonymous death threat directly to a celebrity you dislike? Change any one of these aspects of Twitter — requiring a “real” identity for sign-up, like Facebook does, or disabling @ responses for new users, etc. — and you fundamentally change the nature of the service. If you’re Twitter, you only do this if you’re forced to.

And the way to force its hand, as Costolo’s post indicates, is via its “core users.” Twitter probably doesn’t care if you or I quit because people are being obnoxious to us. If it’s Robin Williams’ daughter, though… that’s different. I’ve argued before that Twitter’s verified celebs are its lifeblood, and if it had been Taylor Swift who ended up fleeing her home because of threats of violence, rather than Anita Sarkeesian, I’m sure we would have seen much more definitive action. When Swift’s Twitter account was hacked last week, Twitter bent over backwards to help: as the singer herself wrote on Tumblr, “Twitter is deleting the hacker tweets and locking my account until they can figure out how this happened and get me new passwords.” It must be nice to get the white-glove treatment that comes with being a celebrity; the rest of us get to wade through the non-specific threats on our own.

Still, the Swift incident shows that when there’s a will, Twitter can get itself into gear and take action instead of hiding behind poorly drafted policies and nebulous PR statements. If Costolo’s message is genuine, it suggests that there’s some hope of Twitter making an effort to clean house — but you’ll forgive me for keeping the cork in the champagne bottle until we actually see some action.