#twitterpurge: A Worst-Case Scenario That Proves Twitter Needs to Take Responsibility For Its Platform

#twitterpurge: it’s a hashtag that trended on Twitter over the weekend and remains popular today, inspired by The Purge, a movie that depicts a world in which all crime is legal for one night every year. A slew of awful people took the concept as their cue to spend an hour posting anything they wanted on the platform. The result? A whole lot of revenge porn images, many of which appear to depict girls under the age of 18. The thing is, kids: that is illegal, and there’s no magical Purge-esque amnesty to save your hides. It’s remarkable this hasn’t been reported more, because we’re talking about Twitter apparently being used for the mass distribution of images that legally qualify as child pornography. Even more remarkable is the fact that Twitter seems disinclined to take proactive steps to stop it.

The whole sorry #twitterpurge phenomenon is a pretty textbook case study in how things spread on the Internet: it started with a single account set up by, you’ll never believe it, a teenage boy. As per The Guardian, the boy “set up a number of Twitter accounts and the hashtag #twitterpurge to try to replicate the anything-goes phenomenon [of The Purge] on the social networking site.” Although Twitter removed the original #twitterpurge account, the hashtag was soon adopted by people whose idea of purging was to post naked photos of their ex-girlfriends. From there, it spread like wildfire, and suddenly Twitter had a new trending topic: child porn!

In fairness, Twitter does have a policy on what it calls “child sexual exploitation,” the relevant part of which is as follows:

We do not tolerate child sexual exploitation on Twitter. When we are made aware of links to images of or content promoting child sexual exploitation they will be removed from the site without further notice and reported to The National Center for Missing & Exploited Children (“NCMEC”); we permanently suspend accounts promoting or containing updates with links to child sexual exploitation.

This seems well-intentioned, but it’s essentially a reactive policy; it relies on users to make individual reports of posts containing illegal images. As such, it’s ill-equipped to deal with a sudden flood of material, which seems to be what happened here. The policy also places the onus on individual users to report and police the distribution of images, even when that distribution is illegal. This is complicated by the fact that merely viewing these images is arguably illegal in itself: the relevant statute criminalizes anyone who “knowingly produces, distributes, receives, or possesses with intent to distribute” (emphasis mine).

As things stand at the moment, you can happily do a search on the hashtag in question and find images that may well land those circulating them in jail. (Take my word for it, unless you fancy feeling physically ill — quite apart from the nude shots, people are posting godawful gore photos with captions like, “If you don’t RT this is what will happen to you.” Teenage boys really are the worst.)

I contacted Twitter for some clarification as to how their policy is enforced. After less-than-helpfully redirecting me back to the policy itself, a Twitter spokesperson finally told me the following: “We do not proactively monitor content on the platform. Users should report potential violations of our rules through individual Tweets and forms available on our site.” This is the case even when it’s clear that a prominent hashtag (#twitterpurge was trending globally yesterday, and remains in use today) is being used for the distribution of images that are in violation of federal law, not to mention likely to cause a whole lot of distress to those whose private nude photos are getting passed gleefully around the Internet.

This isn’t the first area in which Twitter’s passive handling of sensitive issues has been open to criticism; as you’ll recall from several cases last year, if you’re getting a flood of abuse on Twitter, the necessity of individually reporting every death threat from every new account that magically appears to abuse you is… less than ideal. More recently, Ronan Farrow published an op-ed in the Washington Post arguing that social networks should do more to prevent their platforms from being used to incite racial and religious violence.

You can think what you want of Farrow’s arguments (he had quite the spat with Glenn Greenwald about them), but it’s notable that he starts from the position that, “Every major social media network employs algorithms that automatically detect and prevent the posting of child pornography.” If Twitter does use any such system, it failed here, and it’s easy to imagine why, for reasons ranging from the fact that the photos depicted teens rather than pre-pubescent children to the sheer volume of pornographic material #twitterpurge generated.
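For what it’s worth, the automated systems Farrow alludes to generally work by comparing a fingerprint of each uploaded image against a database of fingerprints of known, already-catalogued illegal images (Microsoft’s PhotoDNA is the best-known example, and it uses perceptual hashes that survive resizing and re-encoding). Here’s a rough conceptual sketch of that approach, with entirely hypothetical function and database names and a plain cryptographic hash standing in for the real perceptual one; it is not Twitter’s actual implementation.

```python
import hashlib

# Hypothetical database of fingerprints of known illegal images,
# e.g. hashes supplied by a clearinghouse such as NCMEC. Real systems
# use perceptual hashes (PhotoDNA-style); a plain cryptographic hash
# is used here only to keep the sketch simple.
KNOWN_IMAGE_HASHES: set[str] = set()  # assume this is populated from a hash list


def fingerprint(image_bytes: bytes) -> str:
    """Reduce an image to a fixed-length fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()


def should_block(image_bytes: bytes) -> bool:
    """Block an upload only if it matches a previously catalogued image.

    A brand-new photo -- like most of what #twitterpurge produced --
    has no entry in the database, so this check simply never fires.
    """
    return fingerprint(image_bytes) in KNOWN_IMAGE_HASHES
```

The point of the sketch is the limitation: matching only works against material that has already been identified and added to the database, which is exactly why a flood of newly created images sails straight through.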

If the images aren’t caught by any sort of automated system, the responsibility for reporting them falls back on the user. It’s worth noting that Twitter has no legal obligation to do anything more — as Farrow points out in his piece, “Section 230 of the Telecom Act of 1996 inoculates these companies from responsibility for content that users post — as long as they don’t know about it.” When they are told about it, they pull it down, as they’re required to. The result is that the wronged party — in this case, the unfortunate girls whose images are being circulated — has to play an endless game of whack-a-mole while the network being used to circulate the images carries on blithely as if nothing’s happening.

And that, basically, is that. Is it good enough? No, I don’t think it is. It’s also worth noting that in choosing to maintain any sort of safety net for catching illegal images, Twitter is already making a moral decision to go further than it’s legally required to. But what we have at the moment is a half-measure, a system that can quickly be overwhelmed by a flood of anonymous accounts and viral retweets.

In this case, the sheer volume of images, not to mention the potential for anonymity on Twitter, likely means that the vast majority of people who posted images as part of the #twitterpurge will go unpunished: while the initial account was deleted under Twitter’s anti-child exploitation policy, the horse had long since bolted. Can Twitter do more? Should it do more?

The issues around censorship on the Internet, and the question of policing platform content, will continue to be subjects of public debate. Free speech is, of course, a concept dear to the hearts of Americans, and anything that looks vaguely like censorship is met with suspicion, especially if it seems like it might be the thin end of the wedge. Farrow’s piece quotes an anonymous social media company employee: “The second we get into reviewing any content ourselves, record labels say, ‘You should be reviewing all videos for copyright violations, too.’”

But in this case, we’re not just talking about content that violates copyright, like a link to a Game of Thrones episode download or a cracked version of the latest Grand Theft Auto. This is material that can quite literally be a matter of life and death. I’d argue that while Twitter has no legal obligations here, it sure as hell has moral ones. It’s both unfair and unethical to palm off the responsibility for policing Twitter onto individual users when it’s Twitter itself that has the best resources to do so.