The bot, an app that performs automated tasks as if it were human, has widely been associated with the spread of fake news on social media. But humans too are known to spread fake news, so which of the two plays what kind of role? A new study, published in Nature Communications, looks at the disproportionate contributions of bots and humans to the spread of misinformation on Twitter.
A key finding is that bots amplify low-credibility content in the early spreading moments — they heavily tweet links to such articles, before these go “viral”. Bots also target high-profile handles through replies and mentions. Then humans take over — vulnerable to this manipulation, they re-share content posted by bots.
Is this an accidental trend, or are bots programmed to behave that way? “Yes, I would say the bots are programmed to promote the spread of misinformation,” study co-author Filippo Menczer, a computer science and informatics professor at Indiana University, told The Indian Express by email. “The systematic, instantaneous retweeting by likely automated accounts cannot be by chance. I would guess that those bots are controlled by the same sources who post the misinformation articles. As soon as the article is published and posted, the bots retweet it.”
The findings emerge from a statistical analysis of 14 million tweets over 10 months in 2016-17. These tweets linked to over 4 lakh articles — 3.89 lakh articles from 120 sources known to publish low-credibility content (13 million tweets) and 15,000 articles from seven fact-checking sources (1 million tweets). The researchers used two tools they had developed. “Hoaxy” is a system that tracks the spread, on Twitter, of low-credibility as well as fact-checking articles. “Botometer” is a machine-learning tool to identify social bots.
Who tweets the most? The more a story was tweeted, the more those tweets were found to be concentrated in the hands of a few accounts, which act as “super-spreaders”. These “super-spreaders”, a Botometer test indicated, were more likely to be bots than other accounts that shared low-credibility content.
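This kind of concentration can be illustrated with a toy calculation. The sketch below is not the study's actual code; the data format (a list of account-article pairs) and the choice of "top 1 account" are assumptions made purely for illustration.

```python
# Illustrative sketch: measure how concentrated the tweeting of a
# story is among a handful of "super-spreader" accounts.
from collections import Counter

def top_share(tweets, top_n=10):
    """Fraction of all tweets posted by the top_n most active accounts."""
    counts = Counter(account for account, _link in tweets)
    top = sum(n for _acct, n in counts.most_common(top_n))
    return top / len(tweets)

# Toy data: one account posts most of the links to the same article.
tweets = [("acct_a", "article1")] * 8 + [("acct_b", "article1"),
                                         ("acct_c", "article1")]
print(top_share(tweets, top_n=1))  # 0.8 — one account posted 80% of tweets
```

A high top-share for a heavily tweeted story is the pattern the study describes; whether those dominant accounts are bots is a separate question, answered in the paper with Botometer scores.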
Bots tend to get involved at particular times, the sharing trends showed. In the first few seconds after a link to an article first appears on Twitter, likely bots are more prevalent than at later times. After that, humans do most of the retweeting.
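The timing pattern can be sketched as a simple comparison of bot-likelihood scores in the first seconds versus later. This is a hypothetical illustration, not the paper's pipeline: the delays and scores below are made-up placeholders standing in for Botometer-style scores.

```python
# Illustrative sketch: are the earliest retweeters more bot-like
# than later ones? Each record is (seconds_after_first_share, bot_score).
def mean_bot_score(retweets, t_min, t_max):
    """Average bot score of retweets with t_min <= delay < t_max."""
    scores = [s for delay, s in retweets if t_min <= delay < t_max]
    return sum(scores) / len(scores) if scores else 0.0

retweets = [(1, 0.9), (2, 0.8), (4, 0.85),      # near-instant, bot-like
            (300, 0.2), (900, 0.3), (3600, 0.1)]  # later, human-like

early = mean_bot_score(retweets, 0, 10)    # first 10 seconds: 0.85
late = mean_bot_score(retweets, 10, 7200)  # afterwards: 0.2
print(early > late)  # True
```

In the study's data this gap shows up at scale: accounts retweeting within seconds of publication scored as likely bots far more often than the later retweeters.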
Also, bots often mention influential users in tweets that link to low-credibility content — a single account mentioned @realDonaldTrump in 19 tweets, each linking to the same “false claim”, the researchers note. A possible explanation for this strategy is that bots (or their operators) want to create the appearance that the low-credibility content is widely shared. The hope behind these shares, the study suggests, is that these influential users will then reshare the content to their followers, thus boosting its credibility.
In a previous study (bit.ly/2zqyu7k), researchers at MIT had found that false news spreads faster and wider than true news, and that it did so even without bots. “False news spreads farther, faster, deeper, and more broadly than the truth because humans, not robots, are more likely to spread it,” noted that paper, published in March. The new findings, according to the authors, complement the earlier ones. “[The previous study] claimed that bots alone cannot explain the virality of false news. In fact, we also found that most of the spreading is done by humans. However, bots play a critical role in exposing the misinformation to humans and inducing them to reshare it. Therefore, they work as amplifiers,” Menczer said.
The tweets analysed were posted in the period between mid-May 2016 and end-March 2017, during and following the US presidential campaign. “Also, we developed a tool called Bot Electioneering Volume that shows how actively bots post about the US midterm elections and what/who they tend to promote,” Menczer said.
Is it likely that the same trends would hold for tweets and bots in any country at any time? Menczer said, “It is possible. We get reports from people in other countries using our tools and claiming that they observe similar activity.”