Facebook details policing for sex, terror, hate content

Facebook may remove content, add warnings if content may be disturbing to some users while not violating standards, or notify the police in case of a "specific, imminent and credible threat to human life"

Facebook pulled or slapped warnings on nearly 30 million posts containing sexual or violent images, terrorist propaganda or hate speech in the first three months of 2018, the social media giant said Tuesday.

In an unprecedented report responding to calls for transparency after the Cambridge Analytica data privacy scandal, Facebook detailed its actions against such content in line with its "community standards".

Facebook said improved technology using artificial intelligence had helped it act on 3.4 million posts containing graphic violence, nearly three times more than it had in the last quarter of 2017.

In 85.6 percent of the cases, Facebook detected the images before being alerted to them by users, said the report, issued the day after the company said "around 200" apps had been suspended on its platform as part of an investigation into misuse of private user data.

The figure represents between 0.22 and 0.27 percent of the total content viewed by Facebook's more than two billion users from January through March.

"In other words, of every 10,000 content views, an estimated 22 to 27 contained graphic violence," the report said.

Responses to rule violations include removing content, adding warnings to content that may be disturbing to some users while not violating Facebook standards, and notifying law enforcement in the case of a "specific, imminent and credible threat to human life".

Improved IT also helped Facebook take action against 1.9 million posts containing terrorist propaganda, a 73 percent increase. Nearly all were dealt with before any alert was raised, the company said.

It attributed the increase to the enhanced use of photo detection technology.


Hate speech is harder to police with automated methods, however, because racist or homophobic slurs are often quoted in posts by their targets or by activists.

Sarcasm needs human touch

"It may take a human to understand and accurately interpret nuances like... self-referential comments or sarcasm," the report said, noting that Facebook aims to "protect and respect both expression and personal safety".

Facebook took action against 2.5 million pieces of hate speech content during the period, a 56 percent increase over October-December. But only 38 percent had been detected through Facebook's own efforts; the rest were flagged by users.

The posts that keep the Facebook reviewers the busiest are those showing adult nudity or sexual activity—quite apart from child pornography, which is not covered by the report.

Some 21 million such posts were handled in the period, a similar number to October-December 2017.

That was less than 0.1 percent of viewed content—which includes text, images, videos, links, live videos or comments on posts—Facebook said, adding it had dealt with nearly 96 percent of the cases before being alerted to them.

Facebook has come under fire for showing too much zeal on this front, such as removing images of artwork tolerated under its own rules.

In March, Facebook apologised for temporarily removing an advert featuring French artist Eugene Delacroix's famous work "Liberty Leading the People" because it depicts a bare-breasted woman.
