Stop outsourcing the regulation of hate speech to social media

  

When it comes to dealing with online hate speech, we've ended up in the worst of all possible worlds.

On the one hand, you have social media platforms like Facebook and Twitter that seem extremely reluctant to ban white supremacists and actual neo-Nazis, but enthusiastically enforce their own capricious terms of service to keep adults safe from such harmful things as the female nipple. That is, until something horrific happens, such as the Christchurch massacre, when they decide, after the fact, that some content needed to be banned.

This very much includes Facebook's decision this week to ban white nationalist content, a move that critics have demanded for years and that Facebook could have introduced at any time.

On the other hand, you have democratic governments (leaving authoritarian countries like China out of the mix) that have become far too comfortable exerting behind-the-scenes pressure on platforms to remove content or withdraw their services in the absence of legislation or formal legal orders.

Whether it's pressure on companies to withdraw web-hosting and payment services from Wikileaks following the leak of U.S. diplomatic cables in 2010, or the American government's documented pressure on companies like Google to join supposedly "industry-driven" trademark-enforcement efforts, government regulation of speech is happening, but without any real accountability.

Debating the regulation of online speech

This reality is nowhere reflected in the debates over whether and how to regulate online speech. Instead of grappling with these basic facts, far too much of the debate over how to regulate social media is caught up in a U.S.-driven, libertarian-derived fever dream that sees all speech regulation as inherently problematic, cannot differentiate between liberal-democratic and totalitarian governments, and is obsessed with deploying technological tools to let global platforms deal with any problems.

In other words, governments are regulating speech, but not through democratic channels. Online platforms and internet service providers are regulating speech based on self-interested terms of service. That is, until the moment they decide to drop the banhammer.

The problem, to be crystal clear, is not that governments and these companies are regulating online content. All societies recognize that some kinds of speech are inherently destabilizing or harmful to individuals or specific communities. Child pornography is the most obvious example of this type of content.

Beyond such a straightforward example, different societies will draw different lines between acceptable and unacceptable content (think Germany's ban on public Holocaust denialism), but every society does have a line.

Ignoring the problem

Instead, the real problem, the one we ignore by focusing on the sideshow of whether speech should be regulated when it obviously always is, is about who should draw a line that is inherently subjective, and that changes over time and across societies. It is, in short, an issue of accountability: are we happy with American companies, or with governments engaged in shadowy pressure games, making these decisions?

To deal with both of these problems (decisions made by unaccountable, profit-seeking global giants, and clandestine pressure tactics from supposedly democratic governments), we need to bring decisions about what content should be regulated, and how those decisions are made, into the public sphere.

We have to ensure that decisions about what speech gets regulated are made by the people affected by these rules. That's the whole point of democracy.

What this means is that in the absence of a global government, we need to think nationally, because that's where the accountability mechanisms are.

National regulation also respects the reality that countries have different social and political norms regarding speech. While the U.S. takes its free-speech absolutist stance from its First Amendment, for example, Germany recently instituted a law (the Network Enforcement Act, or NetzDG) requiring social media platforms to remove hate speech or face steep fines.

Germany's law may be controversial, but all efforts to regulate speech involve trade-offs, and these efforts respond to a legitimate societal need. And given the explosion of murderous real-world consequences associated with Facebook, it's not clear that the American way is better.

Global connections

In an ideal world, social media platforms would remain globally connected, but they would operate in countries where domestic law is the first and last word. For a model, consider Canada's banking system: embedded in a global financial system, but subject to strict rules that spared the country the brunt of the 2008 global financial crisis.

Decisions about which speech is regulated would be made out in the open, perhaps by an arm's-length agency modelled on the Bank of Canada.

Such proposals may be too much for those who see in government regulation the shadow of totalitarianism. We understand their concerns, but they need to recognize that we already live in a world of unaccountable government action when it comes to content.

Outsourcing our democratic self-government responsibilities to Mark Zuckerberg has had terrible, even genocidal, consequences. Content and speech are always being regulated; the only question is by whom, and in whose interests.

We believe that when it comes to our speech, citizens should be the ones to decide, with rules that are set transparently and with accountability.
