BERLIN — Sophie Passmann is an unlikely poster child for Germany’s new online hate speech laws.

The 24-year-old comedian from Cologne posted a satirical message on Twitter early on New Year’s Day, mocking the German far right’s fear that the hundreds of thousands of immigrants who have entered the country in recent years would endanger Germany’s culture. Instead of entertaining her more than 14,000 Twitter followers, Passmann found her tweet blocked within nine hours, with the American social media giant telling users in Germany that her message had run afoul of local laws.

Germany’s hate speech rules, known locally as NetzDG and which came into full force Monday, demand that social media giants promptly remove potentially illegal material, some of it within 24 hours of being notified, or face fines of up to €50 million. Enforcement of the rules has reignited debate about their practicality in an age when a tweet, Facebook post or YouTube video can spread virally around the globe within minutes. The law also highlights the problems that policymakers, in Berlin and elsewhere, now face when trying to police what can, and cannot, be posted online, as they try to balance people’s legitimate right to free speech with others’ desire to be protected against harmful material.

“I would consider it a huge coincidence if this didn’t have to do with the new law,” Passmann said of being blocked, adding that during the last couple of months, she has “tweeted things that were significantly more extreme” without being blocked.

Titanic, a German satirical magazine, was similarly barred after parodying anti-Muslim comments on its own Twitter account. At the other end of the political spectrum, Beatrix von Storch, a leading figure in the far-right Alternative for Germany party, was blocked on Twitter and Facebook after posting anti-immigrant messages. Twitter would not comment on either case, but said that, under certain circumstances, potentially harmful tweets may violate the company’s existing terms and conditions rather than the new German hate speech rules.


“It all comes down to conflicting rules with freedom of speech on one side, and human dignity on the other,” said Martin Drechsler, managing director of FSM, a nonprofit organization in Berlin that works with companies to tackle digital hate speech. “You have to balance those fundamental rights against each other.”

German coalition effect

The controversial hate speech rules have come into the spotlight as Germany’s ruling political parties — Chancellor Angela Merkel’s Christian Democrats, their Bavarian ally the Christian Social Union (CSU) and the Social Democrats (SPD) — are trying to patch together another “grand coalition” and put an end to the country’s unprecedented coalition deadlock, almost four months after a general election.

And although it was the political parties’ previous alliance that passed the hate speech law, the legislation will likely reemerge on the negotiation table during the upcoming talks.

“To a certain degree, the concerns that we had back when we were negotiating the law with the Social Democrats now prove to be justified,” said Thomas Jarzombek, a senior member of the German parliament for the CDU. “The core problem is that companies can play judges.”

Jarzombek said Germany should change the law to include more incentives for the tech companies to set up independent, third-party agencies to examine whether content should be blocked, instead of doing it themselves.

“But I don’t have the impression that [the SPD’s] Justice Minister Heiko Maas is willing to change anything about the law,” he said.

Maas — the country’s current federal justice minister, the law’s architect and its most prominent advocate — was quick to defend the rules in an interview with Bild newspaper Thursday, reiterating his position that the principle of freedom of opinion does not allow for spreading criminal content.

“Facebook, Twitter and Co. should have no interest in their platforms being misused for crimes,” he told the German publication.

The country’s demand that social media companies take greater responsibility for what is posted online follows a government report last year showing that Facebook managed to remove only 39 percent of illegal material within 24 hours of being notified by users. Twitter met the deadline in only 1 percent of cases, while YouTube, which removed 90 percent of flagged content within a day, was the only platform to meet the government’s targets.

Tech executives and lobbyists have repeatedly said Germany’s new hate speech rules have the potential to limit freedom of expression of the country’s citizens, and that it should not be left to private companies to determine what should be allowed online.

The hate speech rules “put companies under tremendous time pressure when examining reported content,” said Bernhard Rohleder, chief executive of Bitkom, a German trade body. “The high fines reinforce this pressure. This will inevitably lead to the deletion of permitted content.”

Watching worldwide

Germany’s battle over how to tackle online hate speech — rules that are arguably the strictest anywhere in the Western world — will likely have repercussions across Europe and farther afield.


As politicians from Brussels to Washington try to figure out how best to respond to the rise of such digital content, Europe’s largest economy has become a pivotal test case for many other European countries, particularly France and Britain. Lawmakers in both countries have lashed out against social media giants for not tackling such online material, and are similarly mulling new online hate speech rules that would force tech companies to police what is posted online, potentially making them judge, jury and executioner over the billions of posts, messages and videos uploaded each day.

The European Commission is also mulling potential new pan-EU guidelines to expand on its existing voluntary code of conduct, which almost all of the major social media companies have signed up to. In an interview last year, Věra Jourová, the Commission’s justice commissioner leading the fight against hate speech in Brussels, said large tech companies must deploy more resources to tackle the problem or face potential new rules by the spring, which may force them to comply with existing hate speech legislation that was created for the offline world.


“We want the rule of law to be applied in the internet sphere,” Jourová said. “If the situation becomes unbearable, then we will act with legislation.”

Industry watchers warn, though, that forcing tech companies to decide what is posted online could hand significant power to a few private companies like Twitter and Facebook, which could be prompted to block more content than necessary as they face draconian fines.

“Both sides of this debate are hardening in their positions,” said Stefan Heumann, co-director of Stiftung Neue Verantwortung, a digital think tank in Berlin. “The tech community is rallying behind the platforms, but in the broader public opinion, many want to see this content taken down.”

Heumann said that these social media companies already have the right — written as part of their terms and conditions — to dictate what is published on their online networks, and these platforms should work more closely with German law enforcement agencies so that courts, not private entities, determine what should be considered hate speech.


In the U.K., politicians have called on tech companies to take greater responsibility for potential hate speech and extremist material that is posted on their networks. These calls came after several platforms were found to have published content that ran afoul of existing national hate speech rules. In France, President Emmanuel Macron announced proposals earlier this week that would force social media companies to be more transparent about how so-called fake news, particularly paid for by state-backed actors, was circulated on their networks.

In response, tech companies say they already have taken several steps, including investing millions of euros in artificial intelligence to track potential illegal material online, to combat the spread of hate speech and extremist propaganda across their networks.

“There are some parts of the law that are totally understandable,” said David Kaye, the United Nations’ rapporteur on freedom of opinion and expression. “But my main concerns are about its implementation.”

Politico