Facebook: We're better at policing nudity than hate speech
In this May 16, 2012, file photo, the Facebook logo is displayed on an iPad in Philadelphia. Facebook believes its policing system is better at scrubbing graphic violence, gratuitous nudity and terrorist propaganda from its social network than it is at removing racist, sexist and other hateful remarks. The self-assessment on Tuesday, May 15, 2018, came three weeks after Facebook tried to give a clearer explanation of the kinds of posts that it won't tolerate. (AP Photo/Matt Rourke, File)

Getting rid of racist, sexist and other hateful remarks on Facebook is challenging for the company because computer programs have difficulties understanding the nuances of human language, the company said Tuesday.

In a self-assessment, Facebook said its policing system is better at scrubbing graphic violence, gratuitous nudity and terrorist propaganda than it is at removing hate speech. Facebook said automated tools detected 86 percent to 99.5 percent of the violations in those categories.

For hate speech, Facebook's human reviewers and computer algorithms identified just 38 percent of the violations. The rest came after Facebook users flagged the offending content for review.

Tuesday's report was Facebook's first breakdown of how much material it removes. The statistics cover a relatively short period, from October 2017 through March of this year, and don't disclose how long, on average, it takes Facebook to remove material violating its standards. The report also doesn't cover how much inappropriate content Facebook missed.

Facebook said it removed 2.5 million pieces of content deemed unacceptable hate speech during the first three months of this year, up from 1.6 million during the previous quarter. The company credited better detection, even as it said computer programs have trouble understanding context and tone of language.

Facebook took down 3.4 million pieces of content depicting graphic violence during the first three months of this year, nearly triple the 1.2 million during the previous three months. In this case, better detection was only part of the reason. Facebook said users were more aggressively posting images of violence from places like war-torn Syria.

The increased transparency comes as the Menlo Park, California, company tries to make amends for a privacy scandal triggered by loose policies that allowed a data-mining company with ties to President Donald Trump's 2016 campaign to harvest personal information on as many as 87 million users. The content screening has nothing to do with privacy protection, though, and is aimed at maintaining a family-friendly atmosphere for users and advertisers.

The report also covers fake accounts, which have gotten more attention in recent months after it was revealed that Russian agents used fake accounts to buy ads to try to influence the 2016 elections.

Facebook previously estimated that fake accounts make up 3 percent to 4 percent of its monthly active users. Tuesday's report said Facebook disabled 583 million fake accounts during the first three months of this year, down from 694 million during the previous quarter; the company said the number tends to fluctuate from quarter to quarter. More than 98 percent of those accounts were caught before users reported them, Facebook said.
