
How algorithms fuel misinformation mills around the coronavirus

People either dismiss the crisis or avoid scientific measures of control and containment because they put their trust in fake news. Public caution, and literacy about how to trust information online, are needed.

Written by Nishant Shah |
Updated: March 6, 2020 6:28:30 pm
Pandemonium: Social media is abuzz with misinformation about the outbreak. (Express Photo by Anoop K Venu)

In the age of digital networks, every crisis is accompanied by misinformation. Covid-19, the disease caused by the novel coronavirus, had just begun to assert itself as a global epidemic when the first churnings of the misinformation mill started. There was a range of alarmist, panic-inducing messages, posts, and forwards that immediately hatched conspiracy theories about the origin, the spread, and the rate of contagion, sowing doubt and mistrust in the officials working to contain the outbreaks. This was followed by “secret footage” that claimed exaggerated death numbers, suspicious “leaks”, and even deniers who insisted that this was just a hoax with a covert political agenda.

This early spate of disinformation, laced with fear and aimed at accelerating mass hysteria, is perhaps easy to dismiss. Largely because the virus has garnered so much global attention and coverage, a calmer, more poised, and rational discourse has more or less dominated the conversation, and even the most alarmed are still learning not to overreact: the news of people stockpiling food and discriminating against persons of Asian descent notwithstanding.

It is easy, for most of us who are used to these cycles of alarm, to recognise them as malicious communiqués that can be easily fact-checked and attributed to bots and trolls who revel in creating public panic. However, the more difficult misinformation comes in the guise of miracles, discoveries, breakthroughs, and the triumph of human ingenuity over biological mortality. The messages that recommend self-diagnosis by breathing deep, or the forward that claims eating boiled garlic cures the virus, or the post that uses scientific-sounding language to suggest these viruses only survive in dry environments and hence keeping your mouth continually moist will save you from infection — the sheer variety and vividness of these messages fill me with wonder. Ever since news of community spread at a global level has started showing up, I have been following an almost endless scroll of misinformation that recommends everything from bleach and cow urine to religious conversions and DNA mutation as plausible answers to escape this imminent threat.

It is baffling why these responses emerge. The easy answer is to say that people are ignorant and gullible, that they were always this credulous, always rumourmongers, and that social media is just making it visible. To some extent, this might be true. The flattened surfaces of the digital and the reliance on the “wisdom of crowds” do mean that we are steeped in idiocy and stupidity. It might also be useful to realise, though, that these responses, which emerge out of desperate hope and an optimistic belief in the human capacity to survive, are also augmented and amplified by algorithmic structures that favour metrics of traffic and engagement over verification and veracity.

Algorithms of information sorting and curating add to the misinformation milieu in two distinct ways. The first is information favouring. We often presume that algorithms are neutral or objective; they present themselves as mere vehicles of circulation. But we know from numerous studies of the structural biases and amplified discrimination built into our networks that algorithms favour certain kinds of information. The data points by which one piece of information is favoured over another are opaque. Different platforms have different intentions, but they all prefer traffic: information that is shared is information that is valuable. Algorithms don’t only analyse the sharing patterns of users but also manipulate our attention to make things more shareable. So, as misinformation comes in, remember that algorithmic contexts are pushing it to us.
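To make engagement-first favouring concrete, it can be sketched as a toy model. Every name and number below is hypothetical — this is not any platform’s actual code — but it shows the structural point: when the ranking score is built only from engagement signals, a fact-checked flag never enters the ordering at all.

```python
# Toy sketch (hypothetical, not any real platform's ranker): posts are
# scored purely on engagement signals; accuracy plays no role in the score.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    shares: int
    comments: int
    verified: bool  # fact-checked flag -- note it never enters the score

def engagement_score(post: Post) -> float:
    # Shares weigh more than comments; verification is ignored entirely.
    return 2.0 * post.shares + 1.0 * post.comments

feed = [
    Post("Boiled garlic cures the virus", shares=900, comments=400, verified=False),
    Post("Official containment advisory", shares=120, comments=60, verified=True),
]

# The viral false claim outranks the verified advisory.
ranked = sorted(feed, key=engagement_score, reverse=True)
```

Under this kind of objective, an audit of the ranker would have nothing to find “wrong”: it is doing exactly what it was built to do, which is the argument for auditing the objective itself.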

The second is information personalisation. Algorithms of customisation that surround us on all our devices and apps are building a world of information curated, just for us, by an artificial intelligence corpus. While there can be advantages to this personalisation, it comes with the burden of the algorithm predicting what you are going to like. Your self, usage, histories, and preferences become coded, and the information you are shown might no longer be true but will be deeply resonant with how you view the world and what you would like to hear. Algorithms target us as profiled objects and push us into sharing information because it is tailored to fit how we view the world.
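The personalisation logic can likewise be sketched as a toy model. The tags, the user profile, and the similarity measure here are all illustrative assumptions; the point is that items are ranked by how closely they match a user’s coded history, with truth nowhere in the score.

```python
# Toy sketch (hypothetical): rank items by overlap with a user's coded
# interest profile -- resonance, not veracity, determines what surfaces.
def resonance(item_tags: set, profile: set) -> float:
    # Jaccard similarity between an item's tags and the user's history.
    if not item_tags or not profile:
        return 0.0
    return len(item_tags & profile) / len(item_tags | profile)

user_profile = {"natural-remedies", "wellness", "distrust-of-officials"}

items = {
    "Garlic cures the virus": {"natural-remedies", "wellness"},
    "WHO containment advisory": {"public-health", "official-guidance"},
}

# The false but resonant item is surfaced first for this profile.
personalised = sorted(items, key=lambda t: resonance(items[t], user_profile),
                      reverse=True)
```

For a different profile — say, one coded around public health — the same scorer would surface the advisory first, which is exactly what makes personalised misinformation hard to see from outside any one feed.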

There is much anxiety about how misinformation-sharing cultures are leading to dangerous results, where people either dismiss the crisis or avoid scientific measures of control and containment because they put their trust in fake news. Public caution, and literacy about how to trust information online, are needed. Simultaneously, we also need an audit of the algorithms that fuel these misinformation mills and manipulate users into sharing what is untrue.

Nishant Shah is a professor of new media and the co-founder of The Centre for Internet & Society, Bengaluru
