Illegal content and terrorist propaganda are still spreading rapidly online in the European Union — just not on mainstream platforms, new analysis shows.

Twitter, Google and Facebook all play by EU rules when it comes to illegal content, namely hate speech and terrorist propaganda, policing their sites voluntarily.

But with increased scrutiny on mainstream sites, alt-right and terrorist sympathizers are flocking to niche platforms where illegal content is shared freely, security experts and anti-extremism activists say.

Among the sites now favored for sharing illegal content are a Twitter clone, a video-sharing site and a message board (which has signed on to the European Commission’s voluntary content-policing program), most of which are known to feature neo-Nazi, anti-Semitic, sexist or ISIS-inspired terrorist content, according to a review conducted by the Counter Extremism Project.
“The community has moved away from the mainstream platforms,” said Vincent Semestre, head of Europol’s Internet Referral Unit, which tracks terrorists’ online movements.

Posts seen by POLITICO on such sites feature swastikas or graphic photographs of ISIS executions, all of which clearly fall into the category of “illegal content” as defined by the European Commission.

But the Commission, which relies on voluntary cooperation from site managers to take down content, is largely powerless to force them into action. And the sites themselves only sometimes cooperate — highlighting the shortcomings of a system that relies on self-policing by responsible platforms.

“These companies are set up to not interact with policymakers and governments on this issue,” said Chloe Colliver, a project coordinator at ISD, a civil society organization fighting extremism.

Spike in hate speech

A review of illegal content on some niche sites reveals plenty of far-right meme material not linked to any particular news or current event.

But there is also more topical material, including hate-filled posts about recent riots among migrants in the northern French port town of Calais and posts glorifying Luca Traini, who has been accused of opening fire on a group of migrants in the Italian town of Macerata.

The shooting instantly became a hot-button campaign issue ahead of Italy’s election on March 4, suggesting that some illegal content is being used for political ends.

At the same time, Polish anti-hate speech organization Hejtstop flagged a major spike in anti-Semitic content posted online in the days after the Polish government signed a controversial new Holocaust bill into law. The hate speech spread across all social media sites, including mainstream ones, the group reported.

Facebook, Twitter and other mainstream sites have policies in place to ensure that most illegal content is taken down within minutes or hours. Up to 99 percent of the al-Qaeda and ISIS terrorist content Facebook takes down is detected through automation.

Google is planning to hire more content moderators this year to better monitor its sites, while more and more platforms are signing up to the EU’s voluntary mechanisms.

But experts who monitor hate speech say big social media firms have the resources and incentives to cooperate with EU authorities. Other sites do not, and they have become the go-to sites for hate speech.

“Many platforms, like … 4chan and Discord, are not forthcoming with their approach to restrict and remove the spread of illegal content,” said David Ibsen, the head of the Counter Extremism Project.

For regulators attempting to stem the flow of online hate speech, part of the problem is that new sites and accounts keep cropping up. One such site was created in 2016 as an alternative to Twitter for the alt-right, while researchers frequently discover new venues for hate speech.

“It’s like playing whack-a-mole,” said Haras Rafiq, the CEO of counter-extremism organization Quilliam.

Some activists argue that the only way to obtain better results is to legislate against illegal content online. But for now at least, the Commission — which published its third evaluation of its hate speech code of conduct in January — has no intention of writing laws on the subject.
“No one can force anyone to join [the Commission’s program against illegal content],” a Commission official said.

Asked to comment on the persistence of illegal content online, a spokesperson said: “The Commission agrees that we need to do more at European and global level to get illegal content off the web.”

Other activists argue that it’s time for regulators to focus more on prevention by engaging with people posting this content instead of placing all of the responsibility on platforms.

“It leaves it to civil society organizations and to governments, whether it be through funding or providing research and data, to think up more innovative ways to reach those audiences,” Colliver said.