
Can governments control social media? Or can users?

There is a need for cooperative responsibility in the realisation of public values in societal sectors, centered on online platforms.

Written by Rituparna Banerjee | Updated: November 19, 2019 5:12:09 pm
The very nature of social media intermediaries prevents any neat separation of best parts from their worst. (Image: Getty/Thinkstock)

In 1996, the cyberlibertarian activist, poet and essayist John Perry Barlow issued A Declaration of the Independence of Cyberspace. He poignantly stated: ‘We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.’ Over two decades on, this seems naïve. Worldwide scandals — Cambridge Analytica, Russia’s meddling in the 2016 US election, YouTube’s algorithmic propensity to serve up neo-Nazi propaganda, Twitter’s failure to police white supremacists — have progressively populated our news and conversations. In our own backyard, just recently, as the anti-Muslim hashtag #मुस्लिमो_का_संपूर्ण_बहिष्कार (total boycott of Muslims) continued to trend, Twitter’s silence was deafening. Virtual and real social spaces are now entangled at multiple and variegated levels. Let us not forget the power WhatsApp wields in channelling hate and fear-mongering; in extreme cases, people have been killed by mobs as a result.

Online platforms, as defined by media studies scholars like José van Dijck and Thomas Poell, are socio-technical architectures that facilitate interaction and communication between users by collecting, processing and circulating data. They make possible public activity outside the purview of government institutions, giving currency to notions like ‘participatory culture’ and the ‘sharing’ or ‘collaborative’ economy. Many scholars have highlighted the power of social media to empower individuals and societies to effectively assume roles as producers of public goods and services, and to act as autonomous and responsible citizens. In his book Social Media: A Critical Introduction, however, Christian Fuchs excavates how, in capitalist societies, the Internet is controlled by people who primarily aim to “monetise active users and commodify data”. A participatory democracy, he argues, can never be truly so.

The Indian government, meanwhile, fearing “unimaginable disruption to democratic polity”, aims to have a new set of Internet regulations in place by January 2020. Guidelines are being framed for internet service providers, search engines and social media platforms. The Ministry of Electronics and Information Technology (MeitY), in its affidavit filed with the Supreme Court, stated that although “technology has led to economic growth and societal development”, hate speech, fake news, threats to public order, anti-national activities, defamatory postings and other unlawful activities using internet and social media platforms have been rising exponentially.

Among other demands, MeitY proposes legal amendments requiring intermediaries to trace the origins of ‘fake’ messages within 72 hours of a government agency requisitioning the information. Facebook and WhatsApp, with over 250 million and 400 million active users in India respectively, are currently sparring with the Modi government over the irreconcilable dilemma of national security versus users’ privacy and freedom of speech. But, obviously, the Internet is not a purely national phenomenon; India’s predicament reflects a global unease. Legislations, policy briefs, debates and deliberations are underway across the world to devise the most effective model for online content management. The EU, for instance, addresses this through continent-wide measures like the General Data Protection Regulation (GDPR), regional attempts to regulate social media companies’ role in spreading harmful content, and relatively strong penalty statutes against non-compliant actors.

However, empirical evidence is stacked against the efficacy of such measures. The answer must then lie somewhere in how civil society appropriates social media. Until very recently, the onus of safeguarding public values was on government institutions. But economic liberalisation and the privatisation of public institutions and services, combined with the advancement of digital technologies and the dominance of intermediaries — from general-purpose platforms for social communication (Facebook, Twitter, WhatsApp) to sector-specific platforms in transportation and hospitality (Uber, Airbnb) — have brought about, and continue to foretell, fundamental shifts. Beyond service delivery, these platforms transform people’s lives integrally. With these changes, the composition of public values is altering, affecting not just individual self-interests but also the collective aspirations of societies.

The very nature of social media intermediaries prevents any neat separation of their best parts from their worst. Although the whole world, including Facebook chief Mark Zuckerberg, agrees that there is a need for more government regulation of the Internet, no one knows how, or to what extent. Major roadblocks stand in the way of governments safeguarding democracy from the corrosive effects of social media. While executive action is still pending, contemporary scholarship has helped bring out some of the key obstacles to such action in the European context. Natali Helberger, Jo Pierson and Thomas Poell, for instance, discuss this in a 2017 article, when these concerns were on the rise. Such sustained and ongoing research is significant in, at the very least, providing valuable insights into the larger problem.

First, the dominant online platforms are US-based transnational corporations. They take global architectural decisions with the sole intention of commodifying and datafying people’s voices, which become fodder for profits. Although these platforms pose as mere hosts or facilitators of circulated content, we need to be attentive to how they are vitally constitutive in generating public values. Their role in constructing non-human infrastructures geared to enhance user engagement by spreading viral content cannot be overlooked.

This brings up the second issue: the black-boxed nature of these non-human architectures and the underlying algorithms running them. From a user’s perspective, algorithmic selection occurs through opaque techno-commercial strategies. This opacity baffles even experts struggling to decipher why specific algorithms behave the way they do, and has prevented attempts to even identify or problematise, let alone solve, algorithmic bias. A seemingly simple solution would be complete transparency, so that the decisions being made can be independently evaluated. However, this too is untenable because of its social implications: the loss of privacy for those who generate or own the information, and the darker possibility of algorithms being manipulated by certain groups to their own advantage. Further, it negates the saleability of algorithms for the often for-profit companies that develop them.

Third, the actions and impacts of users and platforms are entangled. Not just platforms, but also active users on them play a role in constructing or eroding public values. It is clear, however, that the power between users and platforms is unequal, not least because of the platforms’ internal, and invisible, murkiness. The question of where the responsibility of the platform ends and that of the user begins is a notoriously difficult one. Users themselves determine and influence what kind of content they upload, share and choose to be exposed to, even if only through their selection of friends or their reading behaviour, which morphs into fodder for a platform’s algorithms. In other words, many problems with diversity or consumer protection on online platforms are, at least to an extent, user-driven. For similar reasons, at least part of the remedy potentially lies with the users.

In conclusion, there is a need for cooperative responsibility in the realisation of public values in societal sectors centred on online platforms. Governments alone can never come up with magic-bullet solutions. It is urgent that all stakeholders — platforms, governments and users — be conscious of their social responsibilities in the realisation of key public values, such as respect for diversity and civility. Shrill cries for transparency cannot even begin to dismantle such a complex issue. However, thinking about how best to implement a culture of tolerance, transparency and accountability offline — through modes like education and interpersonal civic orientation — could be a vital step in the right direction.

