Digital Politics is a column about the global intersection of technology and the world of politics.

Social media giants have come up with a plan in response to growing anger about how political ads spread like wildfire across the web: They're going to ban them.

That is what Twitter has in store: its global moratorium on such paid-for political messaging comes into force on Friday. Not to be outdone, Google announced its own efforts, significantly reducing how these partisan ads can be targeted at would-be voters, in changes that will apply globally from early 2020.

Facebook — by far the largest platform used by political actors to reach people from Berlin to Boston — has flagged its willingness to reduce the potency of such political ads. Yet so far, it has dragged its feet, and it angered many after Mark Zuckerberg, the company's boss, said the social network would not fact-check messages from political groups, even if they spouted incorrect information.

A blanket ban sounds like an easy solution to the growing problem of digital misinformation and the highly partisan views that now litter everyone's social media feeds.

All three companies have taken steps to reduce the spread of misinformation (both political and non-political content), and people's freedom to express themselves online should rightly be protected.

But don't be fooled. These efforts will do little, if anything, to stop the reams of (often false) political messaging from reaching voters.

More important, they don't deal with the underlying problem that now confronts Facebook, Google and Twitter: the need to take greater responsibility for what is published on their networks.

European officials are readying plans to overhaul the region's decades-old rules to make companies more liable for the material that hundreds of millions of people post on these sites each day. National lawmakers in Germany, France and the United Kingdom have similarly called on Big Tech to police online hate speech, disinformation and terrorist content.

And even U.S. lawmakers, until recently not the biggest fans of digital regulation, have their eyes set on revamping existing legislation that currently gives tech companies a pass when it comes to taking responsibility for such online content.


That question — who should be liable for what is posted online, be it regular social media updates, political ads or, in the worst instances, digital hate speech? — has now taken center stage, and will likely vex the tech giants until they finally admit what has become blatantly obvious: that they are responsible for what their users post online.

Political ad ban

When Jack Dorsey, Twitter's chief executive, announced the political ad ban last month, it seemed like a no-brainer.

The company is a relatively small player in the market (in terms of the overall money spent across social media on partisan paid-for messages). But any reduction in how these ads target voters online could potentially calm the divisiveness that has become central to 21st-century elections.

But where Dorsey's ban falls short (ditto for Google's pared-down efforts, published on November 20) is in its scope.

Political ads represent a mere rounding error (less than 5 percent of overall content, based on academic estimates) in the mountain of highly partisan material shared widely each day across social media, often by regular users sharing their thoughts on Facebook pages, in WhatsApp groups or in YouTube videos.

None of the proposed bans will touch any of that material, unless it infringes existing online hate speech rules.

That makes sense, particularly in the United States, where the First Amendment is sacrosanct. But it also leaves a massive hole in the companies' efforts to reduce how political messages reach would-be voters, as almost all of the content posted on social media comes from regular users, not from paid-for political ads.

A candidate, for instance, may no longer be able to target someone online with an ad delivered with almost pinpoint accuracy. But nothing would stop that same candidate (potentially with a significant social media following) from posting updates that were then shared widely among potential supporters, even if those messages contained false or misleading information.

Still, by acknowledging a willingness to moderate online content, Facebook, Google and Twitter have started a process that can only end one way: taking more and more responsibility for what is shared on their networks.
