Twitter rolls out stricter rules on abusive content

  
This Wednesday, Oct. 26, 2016, photo shows a Twitter sign outside of the company's headquarters in San Francisco. Twitter will be enforcing stricter policies on violent and abusive content such as hateful images or symbols, including those attached to user profiles, the company announced Monday, Dec. 18, 2017. (AP Photo/Jeff Chiu)

Twitter has begun enforcing stricter policies on violent and abusive content like hateful images or symbols, including those attached to user profiles.

The new guidelines, which were first announced one month ago, were put into place Monday.

Monitors at the company will weigh hateful imagery in the same way they do graphic violence and adult content.

If a user wants to post symbols or images that might be considered hateful, the post must be marked "sensitive media." Other users would then see a warning that would allow them to decide whether to view the post.

Twitter is also prohibiting users from abusing or threatening others through their profiles or usernames.

While the new guidelines became official on Monday, the social media company is still developing internal monitoring tools and revamping the appeals process for banned or suspended accounts. In the meantime, it will begin accepting reports from users.

Users can now report entire profiles or accounts they consider to be in violation of Twitter policy. Previously, they could only report individual posts they deemed offensive.

Now being targeted are "logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, or ethnicity/national origin."

There is no specific list, however, of banned symbols or images. Rather, the company will review complaints individually to consider the context of the post or profile, including cultural and political considerations.

Twitter is also broadening existing policies intended to reduce threatening content to include imagery that glorifies or celebrates violent acts. That content will be removed and repeat offenders will be banned. Beginning Monday, the company will also ban accounts affiliated with "organizations that use or promote violence against civilians to further their causes."

While more content is now banned, the company has also given itself more leeway after being criticized for rigid enforcement that led to account suspensions.

Twitter faced a backlash after it suspended the account of actress Rose McGowan, who had launched a public campaign over sexual harassment and abuse, specifically naming Hollywood mogul Harvey Weinstein. Twitter eventually reinstated McGowan's account and said it had been suspended because of a tweet that violated its rules on privacy.

"In our efforts to be more aggressive here, we may make some mistakes and are working on a robust appeals process," Twitter said in its blog post.

Twitter relies in large part on user reports to identify problematic accounts and content, but the company said it is developing "internal tools" to bolster its ability to police content.

Twitter also aims to improve how it communicates its decisions to users, including telling those who have been suspended which rules they violated.
