Rice bunny, #metoo, and NetzDG or: are the days of social media numbered?

Would you be surprised if I told you #ricebunny and #metoo are the same thing? If so, I highly recommend listening to the Note To Self podcast episode that digs into the effects and manifestations of the #MeToo movement in China, and that inspired me to write this post.

Harassment, hateful and violent speech, and their far-reaching, traumatising effects on people’s lives are part of our mainstream headlines these days. The majority of this abuse is directed against women and underrepresented minorities. In a way, as feminists will tell you, the inescapable onslaught of harassment on social media has merely raised awareness, beyond its immediate targets, of the misogyny pervading our societies. As sad as this truth is, it has helped spark and amplify broader conversations about allyship and structural privilege.

Any striving for more equal, just societies comes with the need for more respect for one another as human beings. It requires more understanding and a stronger acknowledgment that one’s own rights find their limits where they interfere with someone else’s. This is true for freedom of speech as well. The Universal Declaration of Human Rights protects freedom of speech (Art. 19 UDHR), but it is not an absolute right in the way that the right to dignity or, in fact, the right to equality are (Art. 1 and Art. 2 UDHR; reinforced in the International Covenant on Civil and Political Rights).

Today, much of our communication, information dissemination, and discourse seems to happen on privately owned platforms. This contributes to the perceived power of social media platforms: they’ve come to provide critical functions and services for deliberation.

Yet their business models are built on harvesting, analysing, and selling our data to ever increase the ability to target individuals (for ad money), which has at the same time fueled the precision with which both private and politically motivated actors can manipulate, extort, and incite others.

Surely, this puts an immense level of responsibility on privately owned businesses to manage speech on their platforms. Acknowledging their power over public discourse necessarily also puts governments on the scene, whose mandate it is to protect their citizens from harm and to ensure their rights are protected. What constitutes appropriate regulation, though, remains fiercely debated, notably with a view to public versus private enforcement, and freedom versus censorship.

NetzDG, the German approach to combating hate speech (see my earlier post), has been criticised on these terms, too: it is said to transfer critical democratic safeguards to private actors and to incentivise overblocking. However, the number of complaints and subsequent content removals listed in the first round of transparency reports published by Facebook, Google, and Twitter appears lower than anticipated, with take-down rates ranging between 10.8% (Twitter) and 46.1% (Google+).

The law is by no means perfect, and following an evaluation of its implementation and impact, we can expect a revision by 2020.

But that said, NetzDG may also have provided positive incentives. Platforms and government agencies established contact points that victims of harassment can turn to; both increased the number of staff working on these issues (e.g. 65 at Facebook, 100 at Google, 50 at Twitter), allowing for more deliberation as well as for people who are in a position to assess content in its respective context. Moreover, it has heightened awareness of the severity of this problem, not least since online incitement to violence does indeed often translate into physical harm: against refugees, against minorities, against women.

If nothing else, given the widespread attention, it has helped pinpoint the role and value of social media platforms vis-à-vis public discourse: they are where harassment happens and where people turn to make their voices heard. This, in turn, allows us to talk about checks and balances and the necessary limits of power (and privilege).

What NetzDG doesn’t do, though, is address the question of automation, or the fact that algorithms, fine-tuned to feed each of us more of what we “like”, also contribute to radicalisation, to misinformation, to bias, and to righteousness. We need yet more transparency, more insight, and more accountability if we want to a) tackle structural discrimination and b) rein in the negative consequences of the currently dominant, data-driven business model.

And frankly, in these scandal-ridden times of data breaches, privacy violations, manipulation, political overreach, and more, part of me certainly wonders whether the rationale for platforms to maintain their relevance as a forum for public discourse has ever been clearer.

If social media platforms truly want to bring people together across traditional borders, they need to rethink their decision-making procedures and open their platforms to localised, outside input. They need to report transparently on who sponsors which content, and on what gets taken down, why, and how. They need to give outside researchers access to scrutinise their rules of participation, to provide feedback on how to improve diversity and inclusion, and on how to balance safe spaces for cross-cultural exchange while cracking down on those who seek to undermine and inflict harm. Put differently, they need to acknowledge their power by submitting to checks and balances. And just as it used to be perfectly normal to pay for newspapers, who says people won’t pay for services they value, trust, and that provide them with a meaningful level of access?

Ultimately, without a renewed emphasis on transparency, decentralisation, and participation, the days of (U.S.-style) social media might well be numbered.

Cathleen Berger

Strategy expert, focusing on the intersection of technology, human rights, global governance, and sustainability