In 2017, the way many people perceived social media, and the larger discourse around it, took a decidedly negative turn. From being a Prometheus — the figure from Greek mythology who stole fire from the gods and gave it to man — that connected us to each other like never before, the online networks metamorphosed into a Pandora’s box, now open and facilitating everything from the “rigging” of overseas elections to the amplification of abuse and misogyny. As 2017 ended, Facebook found itself under fire, and how the giants of social media handle these issues will determine how the world consumes and processes information in 2018.
Tackling ‘fake news’
Facebook and Twitter were accused in 2017 of allowing their platforms to be used to “influence” the outcome of the 2016 US presidential elections. Russia-based users bought ads and promoted “fake news” posts, often paying for them in roubles. Given their global operations, to what extent are social media giants even capable of regulating the content they host? Multinationals in general, including tech companies, have a fiduciary responsibility towards their shareholders — their duty is to turn a profit. When dealing with data emerging from billions of sources, how reasonable is it to assume that they will act in the interest of a particular democracy? Managing the tensions of this contradiction, and the need to balance the global character of the Internet with the laws and moral standards of particular countries, will be a major task for social media in 2018.
Giving ‘what people want’
How does Facebook know what you would like to read or share? The answer, to the layman, is as mysterious as the question: the algorithm knows all. Programmers use sophisticated formulae and artificial intelligence (AI) that “read” your preferences from your online activity and suggest what “you may like”, based on what you already like. The algorithm is now seen as part of the problem in a highly polarised world: rather than looking for facts and balance — the normative hallmarks of traditional media — those consuming content online will read and watch largely what reinforces the beliefs they already hold. “Fake news” and “post-truth” thrive in this environment, where Facebook, Twitter and Google have allegedly been manipulated to circulate false stories that have contributed to triggering several political earthquakes. Expect to see more determined attempts at damage control in 2018 — advisories on how to spot fake news, encouragement to seek out diverse sources of information, and the like.
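The “you may like” logic described above can be illustrated with a toy sketch. This is purely a hypothetical, simplified illustration, not how any real platform ranks its feed: a content-based filter that scores unseen posts by their overlap with topics the user has already engaged with, which is also why such systems tend to reinforce existing preferences.

```python
# Toy content-based recommender, for illustration only. Real feed-ranking
# systems use far more complex models (engagement prediction, deep
# learning, etc.); all post IDs and topics here are made up.

def recommend(user_history, candidate_posts, top_n=2):
    """Score each unseen post by topic overlap with the user's history."""
    # Count how often each topic appears in the user's past engagement.
    topic_weight = {}
    for post in user_history:
        for topic in post["topics"]:
            topic_weight[topic] = topic_weight.get(topic, 0) + 1

    seen = {post["id"] for post in user_history}
    scored = []
    for post in candidate_posts:
        if post["id"] in seen:
            continue
        score = sum(topic_weight.get(t, 0) for t in post["topics"])
        scored.append((score, post["id"]))

    # Highest-overlap posts surface first: the filter bubble in miniature.
    scored.sort(reverse=True)
    return [post_id for score, post_id in scored[:top_n] if score > 0]

history = [
    {"id": "p1", "topics": ["politics", "elections"]},
    {"id": "p2", "topics": ["politics", "media"]},
]
candidates = [
    {"id": "p3", "topics": ["politics", "elections"]},  # reinforces existing views
    {"id": "p4", "topics": ["cricket"]},                # no overlap, never surfaces
    {"id": "p5", "topics": ["media"]},
]
print(recommend(history, candidates))  # ['p3', 'p5']
```

Note that the post about cricket never appears, however newsworthy it might be: with no overlap against the user’s history, it scores zero and is filtered out, which is the reinforcement effect in a nutshell.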
Staying away from harm
In April, a Mumbai law student who was addicted to drugs streamed his death as a tutorial on “how to commit suicide”. In November, a 32-year-old telecom worker in Ludhiana did the same. Reacting to similar incidents across the world, a Facebook spokesperson said it was impossible to keep the live-streaming platform “completely free” of suicides. However, the company added over 7,000 employees last year to police content on the site, and to link those appearing to be in distress with people and institutions that were likely to be of help. It is likely that such incidents will continue in 2018; a greater emphasis on developing early identification and warning systems is equally likely.
While a case can be made for basic censorship — of gratuitous sex and violence, advocacy of strife, etc. — on social media, the fact is that the absence of gateways and barriers to entry lies at its very heart. The all-knowing Facebook algorithm flags “inappropriate” content, and users can complain against anything they feel violates the site’s “community standards”. But the inadequacies of this system became clear in 2017, as pictures of women breastfeeding their children were taken down while racist comments, violent videos and threats remained online. A fundamental question for 2018 will be whether social media platforms should remain agnostic, or intervene against certain kinds of negative behaviour.
The privacy question
Doomsday theories have articulated apprehensions not just of an overarching state prying into individuals’ lives, but also of private tech companies (mis)using users’ information for profit. In 2017, multiple reports focussed on the “shadow profiles” that Facebook keeps of each of its users — digital dossiers built from information that users provide, and from the people they have interacted with online, so that the social media behemoth is able to map them better. A question for 2018 will be whether user privacy and security can be left in the hands of companies that are in the business of selling data.
The problems of social media ironically emerge from precisely the feature that makes it so appealing — unlike old media, there are no editors to decide what users should consume. Any attempt at regulation could, therefore, stifle the creativity, democracy, and profitability that the Internet provides. Efforts to reconcile these competing concerns will continue through 2018.