
Eric Goldman is a professor at Santa Clara University School of Law and co-director of the High-Tech Law Institute. Jess Miers is an Internet Law & Policy Foundry fellow and a second-year Tech Edge J.D. student at Santa Clara University School of Law. The opinions expressed here do not necessarily represent those of Ars Technica.

For the first two decades of the commercial Internet, we celebrated the Internet as one of society's greatest inventions. After all, the Internet has led to truly remarkable outcomes: it has helped overthrow repressive political regimes, made economic markets more efficient, created safe spaces for otherwise marginalized communities to find their voices, and led to the most exquisite cat videos ever seen.

But in the last few years, public perceptions of the Internet have plummeted. We've lost trust in the Internet giants, who seem to have too much power and make missteps daily. We are also constantly reminded of all the awful and antisocial ways that people interact with each other over the Internet. We are addicted to the Internet—but we don't really love it anymore.

Many of us are baffled by the degradation of the Internet. We have the ingenuity to put men on the Moon (unfortunately, only men so far), so it defies logic that the most powerful companies on Earth can't fix this. With their wads of cash and their smart engineers, they should nerd harder.

So why does the Internet feel like it's getting worse, not better? And, more importantly, what do we do about it?

It was always thus

Let's start with the feeling that the Internet is getting worse. Perhaps this reflects an overly romantic view of the past. The Internet has always had low-value content. Remember the Hamster Dance or the Turkish "I Kiss You" memes?

More generally, though, this feeling reflects our overly romantic view of the offline world. People are awful to each other, both online and off. So the Internet is a mirror of our society, and as the Internet merges into everyday life, it will reflect the many ways that people are awful to each other. No amount of nerding harder will change this baseline of antisocial behavior.

Furthermore, the Internet reflects the full spectrum of human activity, from great to awful. With the Internet's proliferation—and with its lack of gatekeepers—we will inevitably see more content at the borders of propriety, or content that is OK with some audiences but not with others. We've also seen the rise of weaponized political content, including from state-sponsored entities, designed to propagandize or to pit communities against each other.

There is no magical way to eliminate problematic content or ensure it reaches only people who are OK with it. By definition, this content reflects edge cases where mistakes are most common, and it often requires external context to properly understand. That context won't be available to either the humans or the machines assessing its propriety. The result is those infamous content moderation blunders, such as Facebook's removal of the historic "Napalm Girl" photo or YouTube's misclassification of fighting robot videos as animal abuse. And even if the full amount of necessary context were available, both humans and machines are susceptible to biases that will make their decisions seem wrong to at least one audience segment.

There's a more fundamental reason why Internet companies can never successfully moderate content for a mass audience. Content moderation is a zero-sum game. With every content decision, the Internet companies make winners and losers. The winners get the results they wanted; the losers don't. Hence, there's no way to create win-win content-moderation decisions. Internet companies can—and are trying to—improve their content moderation efforts. But dissatisfaction with that process is inevitable regardless of how good a job the Internet companies do.

So given that Internet companies can never eliminate awful content, what should regulators do?

The downside of “getting tough”

One regulatory impulse is to crack down harder on Internet companies, forcing them to do more to clean up the Internet. Unfortunately, tougher laws are unlikely to achieve the desired outcomes for three reasons.

First, because of its zero-sum nature, it's impossible to make everyone happy with the content moderation process. Worse, if any law enables lawsuits over content moderation decisions, this virtually ensures that every decision will be "litigation bait."

Second, tougher laws tend to favor incumbents. Google and Facebook are OK with virtually any regulatory intervention because these companies mint money and can afford any compliance cost. But the companies that hope to dethrone Google and Facebook may not survive the regulatory gauntlet long enough to compete.

Third, some laws expect Internet companies to essentially eliminate antisocial behavior on their sites. Those laws ignore the baseline level of antisocial behavior in the offline world, which effectively makes Internet companies liable for the human condition.

The logical consequence of "tougher" Internet laws is clear but chilling. Google and Facebook will likely survive the regulatory onslaught, but few other user-generated content services will. Instead, if they are expected to achieve impossible outcomes, they will shut down all user-generated content.

In its place, some of those services will turn to professionally generated content, which has lower legal exposure and is less likely to contain antisocial material. These services will have to pay for professionally generated content, and ad revenue won't be sufficient to cover the licensing costs. As a result, these services will set up paywalls to charge users for access to their databases of professionally licensed content. We will shift from a world where virtually everyone has global publication reach to a world where most readers will pay for access to a much less diverse universe of content.

In other words, the Internet will resemble the cable industry circa the mid-1990s, where cable subscribers paid monthly subscription fees to access a large but limited universe of professionally produced content. All of the other benefits we currently associate with user-generated content will just be fond memories of Gen Xers and millennials.

The way forward is the way back

The irony is that we already have the regulatory solution that will lead to the optimal level of content moderation in our society.

In 1996, Congress enacted 47 USC 230 ("Section 230"). Section 230 says that websites aren't liable for third-party content, with limited exceptions that include intellectual property infringement and federal crimes.