British Prime Minister Theresa May. (Photo: Jack Taylor/Getty Images)

The British government is considering sweeping new laws to regulate problematic content online, ranging from terrorist propaganda to fake news. A new proposal unveiled on Monday would impose a new "duty of care" on websites hosting user-submitted content. Under the plan, a new UK agency would develop codes of practice outlining how sites should deal with various types of harmful content.

The new proposal follows last month's mass shooting in Christchurch, New Zealand, which left 50 people dead. In the wake of that attack, Australia passed a new law that requires major platforms to quickly remove violent online material—or face harsh fines and possibly even jail time. On Monday, a committee of the EU parliament backed a law that would fine online platforms up to 4 percent of their revenue if they failed to take down terrorist content within four hours.

Britain's proposal is much broader, requiring technology companies to police their platforms for a wide range of objectionable material. Companies could face fines if they don't remove harmful material quickly.

A 100-page white paper from Theresa May's government details the many categories of content that would be governed by the new rules, including child pornography, revenge pornography, cyberstalking, hate crimes, encouragement of suicide, sale of illegal goods, sexting by minors, and "disinformation." The proposal would also try to stop inmates from posting online content in violation of prison rules.

Such a sweeping proposal would be unlikely to pass muster in the United States, where the First Amendment sharply limits government regulation of online content. But America is unusual; most countries have a much narrower concept of free speech that leaves governments substantial latitude to regulate content they regard as harmful.

Still, a big question is how to crack down on harmful speech without unduly burdening the speech of legitimate users or the operators of smaller websites. Fundamentally, regulators have two options here: they can require online operators to take down content only after they've been notified of its existence, or they can require platforms to proactively monitor uploaded content.

Current law

Under the EU's E-Commerce Directive, current UK law shields online service providers from liability for content unless they have actual knowledge of its existence. But the UK government is now re-thinking that approach.

"The existing liability regime only forces companies to take action against illegal content once they have been notified of its existence," the white paper says. "We concluded that standalone changes to the liability regime would be insufficient."

Instead, the UK government says it's opting for a "more thorough approach," requiring technology companies to "ensure that they have effective and proportionate processes and governance in place to reduce the risk of illegal and harmful activity on their platforms."

Of course, forcing technology companies to proactively monitor their platforms for objectionable content could create problems of its own, leading to unnecessary removal of legitimate content or erosion of user privacy.

UK regulators say there's no need to worry about this. "The regulator will not compel companies to undertake general monitoring," the white paper says.


Ars Technica
