Software unveiled to tackle online extremism, violence

A software tool unveiled Friday aims to help online firms quickly find and eliminate extremist content used to spread and incite violence and attacks.

The Counter Extremism Project, a nongovernmental group based in Washington, proposed its software be used in a system similar to one used to prevent the spread of online child pornography.

The software was developed by Dartmouth College computer scientist Hany Farid, who also worked on the PhotoDNA system now widely used by Internet companies to stop the spread of content showing sexual exploitation or pornography involving children.

The announcement comes amid growing concerns about radical jihadists using social networks to spread violent and gruesome content and recruit people for attacks.

"We think this is the technological solution to combat online extremism," said Mark Wallace, chief executive of the organization that includes former diplomats and public officials from the United States and other countries.

The group proposed the creation of an independent "National Office for Reporting Extremism" that would operate in a similar fashion to the child pornography center—identifying and flagging certain content to enable online firms to automatically remove it.

This system, if adopted by Internet firms, "would go a long way to making sure that online extremism is no longer pervasive," Wallace told a conference call with journalists.

He said it could be useful in stopping the "viral" spread of videos of beheadings and killings such as those produced by the Islamic State (IS) organization.

Wallace said he expects "robust debate" on what is acceptable content but added that "I think we could agree that the beheading videos, the drowning videos, the torture videos... should be removed."

'Collaborative discussions'

Wallace said the group has had "collaborative discussions" with Internet firms and that "there has been a lot of interest."

But he added that it would be up to each online operator to determine whether to use the software and how to implement it.

Farid, who also spoke on the call, said he believes the new system would be an effective tool for companies that must now manually review each complaint about objectionable content.

"We are simply developing a technology that allows companies to accurately and effectively enforce their terms of service," Farid said.

"They do it anyway, but it's slow."

The system is based on "robust hashing"—computing digital signatures of text, images, audio and video that can be tracked, enabling platforms to identify flagged content and stop it from being posted or reposted.
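The general idea behind robust hashing can be illustrated with a toy sketch. The code below is not the actual algorithm Farid developed; it shows a hypothetical "average hash" over a tiny grayscale image, where slightly altered copies still produce nearly identical signatures that can be matched by Hamming distance:

```python
def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Count the differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def matches(h1, h2, threshold=2):
    """Treat near-identical hashes as the same content (robust to small edits)."""
    return hamming_distance(h1, h2) <= threshold

# A 4x4 "image" and a slightly altered copy (one pixel tweaked),
# standing in for an original video frame and a re-encoded repost.
original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
altered = [[198, 200, 10, 10],
           [200, 200, 10, 10],
           [10, 10, 200, 200],
           [10, 10, 200, 200]]

h_orig = average_hash(original)
h_alt = average_hash(altered)
print(matches(h_orig, h_alt))  # prints True: the small edit still matches
```

Unlike a cryptographic hash, where changing one pixel changes the entire signature, a robust hash changes only slightly under small edits, which is what lets a platform recognize re-uploads of previously flagged material.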

"The technology has been developed, it has been tested and we are in the final stages of engineering to get it ready for deployment," Farid said. "We're talking about a matter of months."

Social networks have long stressed they will help legitimate investigations of crimes and attacks, but have resisted efforts to police or censor the vast amounts of content flowing through them.

Farid said he developed the software with a grant from Microsoft, and that he and the Counter Extremism Project would work to provide it to online companies.
