Visualization of the spread through social media of an article falsely claiming 3 million illegal immigrants voted in the 2016 presidential election. (Credit: Filippo Menczer)

Shortly after the 2016 election, newly elected President Donald Trump—peeved at losing the popular vote to Democratic opponent Hillary Clinton—falsely claimed he would have won the popular vote if not for the supposed votes of 3 million illegal immigrants. The lie spread rapidly across social media—far faster than factual attempts to debunk it. And Twitter bots played a disproportionate role in spreading that false information.

That's according to a new study by researchers at Indiana University, published in Nature Communications. They examined 14 million messages shared on Twitter between May 2016 and May 2017, a period spanning the presidential primaries and Trump's inauguration. They found that just six percent of Twitter accounts identified as bots were enough to spread 31 percent of what they term "low-credibility" information on the social network. The bots accomplished this largely through automated amplification within the first two to 10 seconds after an article was posted.

Why are bots so effective at spreading false information? Study co-author Filippo Menczer attributes their success to so-called "social bias": the human tendency to pay more attention to things that seem to be popular. Bots can create the appearance of popularity, making a given opinion seem more widely held than it actually is. "People tend to put greater trust in messages that appear to originate from many people," said Menczer's co-author, Giovanni Luca Ciampaglia. "Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them."
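To make that dynamic concrete, here is a minimal toy sketch of the mechanism (not the model used in the study; every account count and probability in it is made up): simulated users reshare a story with a probability that rises with its apparent popularity, and a hypothetical pool of bots seeds fake shares before any humans arrive.

```python
# Toy simulation (illustrative only, not the researchers' model): human accounts
# reshare a story with a probability that grows with its apparent share count,
# so a pool of bots seeding fake shares can pull real users into amplifying it.
# All account counts and probabilities below are hypothetical.
import random

random.seed(42)

N_HUMANS = 10_000            # human accounts that encounter the story
N_BOTS = 600                 # hypothetical bots, each adding one fake share up front
BASE_SHARE_PROB = 0.01       # chance a human reshares a story with zero visible shares
POPULARITY_WEIGHT = 0.00002  # how much each visible share raises that chance

def human_reshares(initial_bot_shares: int) -> int:
    """Count human reshares when the story starts with `initial_bot_shares` fake shares."""
    visible_shares = initial_bot_shares
    reshares = 0
    for _ in range(N_HUMANS):
        p = min(1.0, BASE_SHARE_PROB + POPULARITY_WEIGHT * visible_shares)
        if random.random() < p:
            reshares += 1
            visible_shares += 1  # each human reshare makes the story look more popular
    return reshares

print("human reshares without bots:", human_reshares(0))
print("human reshares with bots:   ", human_reshares(N_BOTS))
```

In this toy setup, the bot-seeded run typically ends with roughly twice as many human reshares as the clean run, even though the bots contribute nothing after their initial fake shares; the entire difference comes from the story merely looking popular from the start.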

Their findings are consistent with those of an earlier study by MIT researchers, published this past March in Science. Those researchers concluded that false stories travel "farther, faster, deeper, and more broadly than the truth in all categories of information." The MIT study analyzed roughly 126,000 stories tweeted by around 3 million people more than 4.5 million times between 2006 and 2017. The result: a false story needs only about 10 hours to reach 1,500 users on Twitter, compared to roughly 60 hours for a true story.

"No matter how you slice it, falsity wins out," said co-author Deb Roy, who runs MIT's Laboratory for Social Machines.

Roy and his colleagues also found that bots sped up the spread of both true and false news at equal rates. So he concluded that it's the human factor, more than bots, that is responsible for the spread of false news.

That's why the Indiana study emphasized the critical role played by so-called "influencers": celebrities and others with large Twitter followings who can contribute to the spread of bad information via retweets—especially if the content reaffirms a target group's preexisting beliefs (confirmation bias). Menczer and his colleagues found evidence of a class of bots that deliberately targeted influential people on Twitter. Those people then "get the impression that a lot of people are talking about or sharing a particular article, and that may lower their guard and lead them to reshare or believe it," said Menczer. He calls it the "useful idiot" paradigm.

As a recognized "influencer," President Donald Trump's Twitter account is frequently targeted by bots spreading misinformation. (Credit: Jaap Arriens/NurPhoto/Getty Images)

Another new study bolsters that finding. Researchers at the University of Southern California reviewed 4 million Twitter posts about Catalonia's referendum on independence from Spain. They found that, far from acting at random, bots in that debate actively targeted influential Twitter users with negative content designed to create social conflict. Those users often did not realize they were being targeted, so they retweeted the content and helped spread the misinformation. That paper recently appeared in the Proceedings of the National Academy of Sciences.

"This is so endemic in online social systems; no one can tell if they are being manipulated," said USC study co-author Emilio Ferrara. "Every user is exposed to this either directly or indirectly because bot-generated content is very pervasive." He thinks that solving this problem will require more than just technological solutions. "We need regulation, laws, and incentives that will force social media companies to regulate their platforms," he said. Twitter is already starting to vet new automated accounts to make it harder to create an army of automated bots, according to Menczer.

The potential downside is that bots are not necessarily a force for evil; they can help amplify emergency alerts, for example. Like any technological tool, the effect depends on how one wields it. But curbing them may be an acceptable trade-off, given the damage viral misinformation can inflict: Menczer and his colleagues found that eliminating just 10 percent of the bot accounts on Twitter resulted in a significant drop in the number of news stories from low-credibility sources being shared.

How to strike that balance is the provocative question at the heart of the Indiana study. "Should we try to catch [viral misinformation] after the fact, or should we be in the business of [applying] a filter at the time that information is generated?" said Menczer. "Clearly there are pros and cons to making it harder for automated accounts to post information."

DOI: Nature Communications, 2018. 10.1038/s41467-018-06930-7 (About DOIs).

DOI: PNAS, 2018. 10.1073/pnas.1803470115 (About DOIs).
