In November 2016, days after Donald Trump was elected US President, Facebook founder and CEO Mark Zuckerberg rejected the idea that content on the company’s platforms influenced elections. “I think it’s a pretty crazy idea… Voters make decisions based on their lived experiences,” he said. On January 6, after the US Capitol was stormed by Trump supporters, Facebook, Twitter, Snapchat and Twitch suspended the President’s accounts, ostensibly to prevent him from inciting more violence. The move is too little and, coming after a mob egged on by the US President laid siege to the Capitol, too late. At a time when “lived experiences” are increasingly lived online, the social media giants have been much too slow to acknowledge their responsibility and address the real-world consequences of what happens on their platforms.
Trump has a long record of sharing falsehoods and bigotry on social media, beginning with his assertions that his predecessor Barack Obama was not born in the US, and extending to his sharing of anti-Muslim and White supremacist posts and videos. As recently as the Black Lives Matter protests last year, he tweeted, “when the looting starts, the shooting starts”. Trump and his falsehoods, though, are only a part of a larger digital ecosystem that has devalued facts, created “communities” of conspiracy theorists and, at times, even contributed to violence. Long before the pro-Trump mob stormed the Capitol, WhatsApp and Facebook were used to peddle falsehoods and incite mobs in countries like Burma and Sri Lanka. These outcomes are an unfortunate consequence of algorithms and user-experience and interface choices that follow a “persuasive design” model. In essence, the algorithm is meant to keep users glued to the screen by feeding them more of the same. For example, someone watching “flat earth” videos, or “the history of forced conversion in India” or, more recently, the bizarre conspiracy theories about a film star’s suicide, could well be led to something like a QAnon page (a group that believes Trump is battling a “deep state” within the US government) or to fake news that whips up prejudice against minority groups.
While Big Tech, under pressure from governments and the threat of external regulation, has moved on from outright denial of its culpability and instituted some measures for fact-checking and verifying sources, the fundamental structure of its platforms remains unaltered. Companies must confront this challenge at the level of the technology itself, and urgently. But the responsibility for a political discourse that respects facts and decency cannot rest on tech companies alone. Political parties, for instance, need to be held accountable for the conduct of their leaders and members. Just as tech companies must be called out for amplifying falsehoods and creating mobs because it helps their bottom line, political parties must be held responsible if they do so in the name of winning elections and appealing to their base. Blocking a loudmouth, a despot or the leader of a mob is hardly the answer.