YouTube has, yet again, failed to protect children online. Recent investigations by Wired and video blogger Matt Watson have alleged that paedophiles were using the site's comments section to leave predatory messages on videos featuring and uploaded by children, and to share links to child sexual abuse material.
In response to the investigations – and the threat of an advertiser boycott – YouTube has now said it will disable comments on videos featuring young children. But sadly, this is not an isolated incident. In January 2019 it was alleged that Microsoft's Bing search engine was surfacing and suggesting child sexual abuse material. And these kinds of incidents repeat similar problems that have occurred over the past five years.
The reality is that the internet has a systemic problem with child sexual abuse material. It isn't confined to niche sites or the dark web, but hides in plain sight among content hosted and controlled by the tech giants. We must do more to protect children online, and this action has to go beyond tweaks to algorithms or turning off comments.
In 2016, more than 57,000 web pages containing child sexual abuse images were tracked by the Internet Watch Foundation – a UK-based body that identifies and removes such illegal content. This was an increase of 21% from the previous year. The US-based National Center for Missing and Exploited Children received more than 10m reports of child sexual abuse content in 2017, an increase of 22% from the previous 12 months. It's likely that these initiatives, while much needed, are identifying and removing only a small fraction of the content that is distributed online every day.
Images depicting child abuse that are posted online have a severe impact on victims for years or decades after the physical abuse has ended. These children have already been victimised, but research shows that the continued availability of their images online keeps the nightmare alive for them, their families and friends. It can also significantly affect victims' interactions with the internet for the rest of their lives.
Technology companies are uniquely positioned to act as gatekeepers, removing and reporting sexually explicit content that is uploaded onto their services. So why don't they do more to aggressively protect the millions of children around the world who use their products?
Even in the early days of the web, it was clear that services provided by technology companies were being used to spread child sexual abuse content. As early as 1995, the chatrooms of AOL – an early incarnation of social media – were allegedly used to share child abuse material. In response, AOL executives at the time claimed that they were doing their best to rein in abuses but that their service was too large to manage. This is precisely the same excuse that we hear more than two decades later from the titans of tech.
Between 2003 and 2008, despite repeated promises to act, major tech companies failed to develop or use technology that could find and remove illegal or harmful content, even though it violated their terms of service. Then in 2009, Microsoft worked with the National Center for Missing and Exploited Children and a team at Dartmouth College that included one of us (Hany Farid) to develop PhotoDNA. This software quickly finds and removes known instances of child sexual abuse content as it is uploaded, and has been provided at no cost to technology companies participating in the initiative.
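PhotoDNA itself is proprietary, but the general idea behind it – a robust "perceptual" hash that matches an uploaded image against a database of hashes of known illegal images, even after resizing or re-compression – can be sketched in a few lines. The example below is a minimal illustration only, using the open-source imagehash library as a stand-in for PhotoDNA's hash; the hash values, file name and distance threshold are hypothetical placeholders.

```python
# Minimal sketch of robust hash matching, in the spirit of PhotoDNA.
# Assumptions: the `imagehash` and Pillow libraries are installed, and
# `known_hashes` stands in for a database of hashes of previously
# identified images supplied by a clearinghouse (hypothetical here).
import imagehash
from PIL import Image

# Hashes of known images (this hex string is a placeholder).
known_hashes = {imagehash.hex_to_hash("d1d1b1a1e1f10307")}

# Perceptual hashes survive resizing and re-compression, so we match
# within a small Hamming distance rather than requiring exact equality.
MAX_DISTANCE = 5  # illustrative threshold, not a production value

def should_block(upload_path: str) -> bool:
    """Return True if the uploaded image matches a known hash."""
    upload_hash = imagehash.phash(Image.open(upload_path))
    return any(upload_hash - known < MAX_DISTANCE for known in known_hashes)

if should_block("upload.jpg"):
    print("Match found: block the upload and file a report.")
```

Because the hash is computed from the image signal rather than the file's bytes, trivial alterations such as re-saving or rescaling do not defeat the match – which is what makes this kind of check practical at upload time.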
After years of pressure, PhotoDNA is now used by many web services and networks. But technology firms have since failed to innovate further in response to an increasingly sophisticated criminal underworld. For example, despite foreseeing the rise in child abuse videos, tech firms have not yet deployed systems that can identify offending footage in the way that PhotoDNA does for images.
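One plausible route – our own sketch, not a description of any deployed system – is to sample frames from an uploaded video at regular intervals and run each frame through the same image-hash check. The function name and sampling interval below are assumptions for illustration; OpenCV is used only to decode frames.

```python
# Sketch: extending image hash matching to video by sampling frames.
# Assumptions: OpenCV (cv2), imagehash and Pillow are installed; the
# sampling interval is illustrative, not taken from a real system.
import cv2
import imagehash
from PIL import Image

def sample_frame_hashes(video_path: str, every_n_seconds: float = 1.0):
    """Yield a perceptual hash for one frame per sampling interval."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
    step = max(1, int(fps * every_n_seconds))
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            # OpenCV decodes frames as BGR; convert before hashing with PIL.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            yield imagehash.phash(Image.fromarray(rgb))
        index += 1
    cap.release()
```

A moderation pipeline could then compare each sampled hash against the known-image database exactly as in the previous sketch, flagging the video if any frame (or some minimum number of frames) matches.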
These companies need to act more quickly to block and remove illegal images, as well as to respond to other activity that enables and encourages child exploitation. This means continually developing new technologies, but it also means fundamentally rethinking the perverse incentive of making money from user content, regardless of what that content actually is.
Standing in the way of control
However, a combination of financial, legal and philosophical issues stands in the way of tech firms reining in illegal activities on their services. In the first instance, removing content is often simply bad for business, because it reduces opportunities for advertising revenue and for gathering user data (which can also be sold).
Meanwhile, the law often absolves tech firms of responsibility for the content they host. In the US, Section 230 of the Communications Decency Act gives tech firms broad immunity from liability for the illegal activities of their users. This immunity relies on categorising the likes of YouTube or Facebook as benign "platforms" as opposed to active "publishers". The position in the EU is similar. What's more, some tech companies believe that tackling illegal activity is a state responsibility, rather than a corporate one.
Given the size, wealth and reach of the tech giants, these excuses don't justify inaction. They need to proactively moderate content and remove illegal images that have been uploaded to their sites. They could and should also help to inform research in this vital area of child safety, working with law enforcement and researchers to investigate and expose the scourge of online child abuse.
Advertisers can apply financial pressure to encourage sites to moderate and block illegal and abusive third-party content (as several companies have done following the latest failures on YouTube). But such boycotts rarely last. So if public pressure isn't enough, government regulation that forces companies to comply with their own terms of service and local laws may be necessary.
Such regulation might be difficult to police, and it may have unintended consequences, such as making it harder for small companies to compete with the current giants of technology, or encouraging companies to overreact and become overly restrictive about permissible content. Either way, we would prefer that technology companies harness their enormous wealth and resources and simply do the right thing.