Content and platform regulation: The German case and what’s to come in 2018

Cathleen Berger
6 min read · Dec 30, 2017

#NetzDG #socialmedia #hatespeech #fakenews #censorship #liability #enforcement #responsibility #territoriality #humanrights

[Image: © Cathleen Berger, San Francisco]

In 2017, Germany made international headlines as a potential new “free speech sheriff” when its parliament passed the “Act to Improve Enforcement of the Law in Social Networks” (Netzwerkdurchsetzungsgesetz, or “NetzDG”) on June 30. The law entered into force on October 1, 2017, and requires affected platforms to be compliant by January 1, 2018.

These are the main stipulations of the act:

  • Its objective is to combat illegal and harmful content on social media platforms.
  • It only applies to platforms that a) have more than 2 million users in Germany and b) are designed to let users share arbitrary, non-specific content with the public (Art. 1 I). In other words, the law is aimed at Facebook, YouTube, and Twitter. Platforms such as LinkedIn or Xing are exempt, since they serve to distribute specific content.
  • Platforms are required to block or delete unlawful content once it has been flagged, i.e. once they have been notified. Platforms do not have to proactively search for illegal content or install upload filters to preempt repeated offences.
  • In order to comply, platforms must take down content that is “manifestly unlawful” within 24 hours; all other unlawful content must be removed “immediately”. “Manifestly unlawful” refers to content that fulfills the statutory definition of a criminal offence and does not require detailed examination to be recognised as such. “Immediately” generally means within 7 days but may take longer if a) the platform needs to gather factual evidence or grants the user the opportunity to respond, or b) the decision is deferred to a regulated self-regulatory body (Art. 1 III).
  • Once content is blocked or taken down, platforms must retain it for 10 weeks so that law enforcement can pursue the case. Platforms are also required to appoint a German-speaking contact person (Art. 1 V).
  • If a platform receives more than 100 complaints about unlawful content per year, it must publish a transparency report, in German, every 6 months. The first reports will be due by July 2018 (Art. 1 II+VI).
  • Interestingly, in case of non-compliance, the German enforcement agency must seek a preliminary court ruling to determine whether a flagged piece of content is unlawful before it can issue sanctions against the platform (Art. 1 IV (5)).

The Federal Office of Justice, the agency in charge of issuing sanctions where applicable, has since specified the scope of the law by outlining categories of platforms and the maximum fines they may face:

  • Category A: Any platform with more than 20 million users in Germany. Fines in this category range from 2.5 million to 40 million euros. Currently, this only applies to Facebook, with roughly 31 million German users.
  • Category B: Platforms with a German user base between 4 and 20 million. Penalties in this category start at 1 million and go up to 25 million euros. This will probably only apply to YouTube and Instagram.
  • Category C: Platforms with between 2 and 4 million German users. Sanctions in this category range from 250,000 to 15 million euros. According to current estimates, this would include Twitter.
  • Individual employees can, in severe and repeated cases, be fined up to 400,000 euros (rather than the maximum of 5 million euros that Art. 1 IV (2) NetzDG would allow).

And now?

Following the German elections in September 2017, the country’s leading parties have still not been able to form a new government. It is therefore hard to tell whether discussions around reviewing and/or abolishing the NetzDG will be opened up again. It is likely, though, that any German government will pursue this debate in international engagements like the Freedom Online Coalition, chaired by Germany in 2018, or the UN-mandated Internet Governance Forum, to be hosted in Berlin in 2019.

In the meantime, social networks are busy figuring out how to meet the compliance requirements. Facebook, for instance, claims to have a team of 40 people working on compliance, while Twitter started by rolling out a notification system that asks users to indicate which provisions of the NetzDG the reported post allegedly violates.

Why should I care?

The German Act is not an isolated story. In 2017, we saw an unprecedented level of attention being paid to ‘hate speech’, online harassment, the use of platforms for radicalisation and the sharing of extremist content, the spread of ‘fake news’, misinformation, and propaganda. The U.S. discourse post-Trump, the ‘silence breakers’ and #metoo, election interference and micro-targeting, or the racist and misogynist troll armies that seem to be spreading everywhere — all these phenomena are interrelated, if distinct.

What is important is that all of them have raised questions around the role and responsibilities of platforms: Are they liable for user-generated content shared on their platforms? Where does the responsibility to protect users end and censorship begin? How can cultural biases be mitigated in a global network? And what exactly constitutes ‘hate speech’ or ‘fake news’?

For the longest time, these largely U.S.-born platforms operated under the assumption that ‘bad speech’ could be countered with more, good speech. Now, companies like Twitter have updated their policies to address hateful content more clearly, arguing that “freedom of expression means little if voices are silenced because people are afraid to speak up.” This renunciation of the free speech doctrine opens up a new discourse around regulation and liability, and around the actual enforcement power of governments. The debate about what is acceptable behaviour, who is welcome in which forum, and who doesn’t deserve a platform at all is shifting. Identifying a rights-respecting, yet nuanced approach to such broad societal problems won’t be easy. And yes, it may well open the doors for more authoritarian regimes to push their agendas, even as democratic governments struggle to live up to their responsibilities while trying to contain hate and violence.

And in this climate, Germany was simply one of the first countries to adopt legislation that regulates how platforms are to handle unlawful content on their sites. This did not go unnoticed in other parts of the world. One way or another, the act has influenced regulatory efforts in Russia, Venezuela, Kenya, and the Philippines. Similar discussions are taking place in France and the UK. And it seems that Iraq may be next, as proposals around content and platform regulation have been put forward as part of a far-reaching cybercrime bill.

Moreover, discussions will continue in the EU, both with regard to illegal content on social media platforms and to ‘fake news’ and disinformation. And in June 2018, the UN Special Rapporteur on freedom of expression, David Kaye, will also present his report on content regulation in the digital age to the Human Rights Council.

We won’t fix societal problems on online platforms, but it has to be clear that the internet is not a rule-free space. Free speech must not silence the vulnerable, and it must not be used as a veil to manipulate people with emotionally charged, false claims. Yet free speech, and other human rights, must be protected. Among the important problems to be addressed in 2018, including the open questions around the NetzDG, are:

  • Territoriality: How do we ensure that the internet as a global network and platforms as global communication hubs won’t be broken up and fragmented by (contradictory) national jurisdictions?
  • Enforcement: Online life is offline life, and regulatory efforts should be directed at issues, not at particular platforms, even if problems seem to be significantly amplified online. Moreover, we need to ensure that regulations only enhance existing responsibilities and do not outsource matters of law enforcement to private entities.
  • Safeguards: How can we best ensure that minorities and vulnerable groups are protected without undermining other rights? Which procedures should be implemented to allow for appeals and remedies if content is mistakenly or disproportionately blocked or taken down?
  • Automation: The questions around standardisation, algorithms, and machine learning to facilitate content moderation are too complex to simply be lumped in here — but they will be and must be at the forefront in the regulatory discussions to come.
