Content and platform regulation: The German case and what’s to come in 2018

#NetzDG #socialmedia #hatespeech #fakenews #censorship #liability #enforcement #responsibility #territoriality #humanrights

© Cathleen Berger, San Francisco

In 2017, Germany made international headlines as a potential new “free speech sheriff” when the parliament passed the “Act to improve the enforcement of the law in social networks”, or in German “NetzDG”, on June 30. The law entered into force on October 1, 2017 and requires affected platforms to be compliant by January 1, 2018.

These are the main stipulations of the act:

  • Its objective is to combat illegal and harmful content on social media platforms.

The Federal Office of Justice, which is the agency in charge of issuing sanctions where applicable, has since specified the scope of the law by outlining different categories and maximum amounts of possible fines:

  • Category A: Any platform with more than 20 million users in Germany. Fines in this category range from 2.5 million to 40 million euros. Currently, this only applies to Facebook with roughly 31 million German users.

And now?

Following the German elections in September 2017, Germany’s leading parties still haven’t been able to form a new government. Whether or not discussions around reviewing and/or abolishing the NetzDG will be opened up again is therefore hard to tell. It is likely, though, that any German government will pursue this debate in international engagements like the Freedom Online Coalition, chaired by Germany in 2018, or the UN-mandated Internet Governance Forum, to be hosted in Berlin in 2019.

In the meantime, social networks are busy figuring out how to meet the compliance requirements. Facebook, for instance, claims to have a team of 40 people working on compliance, while Twitter started by rolling out a notification system that asks users to indicate which paragraphs of the NetzDG are being violated by the reported post.

Why should I care?

The German Act is not an isolated story. In 2017, we saw an unprecedented level of attention being paid to ‘hate speech’, online harassment, the use of platforms for radicalisation and the sharing of extremist content, the spread of ‘fake news’, misinformation, and propaganda. The U.S. discourse post-Trump, the ‘silence breakers’ and #metoo, election interference and micro-targeting, or the racist and misogynist troll armies that seem to be spreading everywhere — all these phenomena are interrelated, if distinct.

What is important is that all of them have raised questions around the role and responsibilities of platforms: Are they liable for user-generated content shared on their platforms? Where does the responsibility to protect users end and censorship begin? How can cultural biases be mitigated in a global network? And what exactly constitutes ‘hate speech’ or ‘fake news’?

For the longest time, these largely U.S.-born platforms operated under the assumption that 'bad speech' could be countered with more, better speech. Now, companies like Twitter have updated their policies to address hateful content more clearly, arguing that “freedom of expression means little if voices are silenced because people are afraid to speak up.” This renunciation of the free speech doctrine opens up a new discourse around regulation and liability — and the actual enforcement power of governments. The debate about what is acceptable behaviour, who is welcome in which forum, and who doesn’t deserve a platform at all is shifting. Identifying a rights-respecting yet nuanced approach to such broad societal problems won’t be easy — and yes, it may well open the door for more authoritarian regimes to push their agendas, even as democratic governments struggle to live up to their responsibilities in trying to contain hate and violence.

And in this climate, Germany was simply one of the first to adopt legislation that regulates how platforms are to handle unlawful content on their sites. This did not go unnoticed in other parts of the world. One way or another, the act has influenced regulatory efforts in Russia, Venezuela, Kenya, and the Philippines. Similar discussions are taking place in France and the UK. And it seems Iraq may be next, as proposals around content and platform regulation have been put forward as part of a far-reaching cybercrime bill.

Moreover, discussions will continue in the EU, both with a view to illegal content on social media platforms as well as on ‘fake news’ and disinformation. And in June 2018, the UN Special Rapporteur for freedom of expression, David Kaye, will also present his report on content regulation in the digital age to the Human Rights Council.

We won’t be fixing societal problems on online platforms, but it’s got to be clear that the internet is not a rule-free space. Free speech must not silence the vulnerable and free speech must not be used as a veil to manipulate people with emotionally charged, false claims. Yet, free speech — and other human rights — must be protected. Among the important problems to be addressed in 2018, including the open questions around NetzDG, are:

  • Territoriality: How do we ensure that the internet as a global network and platforms as global communication hubs won’t be broken up and fragmented by (contradicting) national jurisdictions?

Strategy expert, focusing on the intersection of technology, human rights, global governance, and sustainability
