As the United States heads into the week leading to November 3, social media's role in shaping opinions and political discourse has come to the forefront. On October 27, there were reports of Facebook approving numerous Donald Trump campaign ads in violation of its own pre-election policies. In an apparent bid to demonstrate its commitment to election integrity, Facebook had made two announcements. In September, it announced that it would stop accepting new political ads after October 27. Then, in October, it announced that after the polls close, it would ban all political ads indefinitely, to prevent a campaign from claiming victory before the final results were tabulated and announced. However, on the first day of the moratorium, several ads appeared on the platform in violation of its own policies. Many of these ads were taken down only after being flagged. Many have claimed that Facebook has been soft on the Republicans.
Similarly, at the Senate Judiciary Committee hearing held on October 28, Twitter was criticised for flagging posts by President Donald Trump while neglecting to do the same for posts by the rulers of Iran and Venezuela. Both Facebook and Twitter have also been accused of censoring the New York Post story on Hunter Biden. Twitter has been criticised in many countries, including India, for its biased and inconsistent record in flagging posts and blocking accounts.
The impact such social media platforms are having on elections in the US is scary, to say the least. In addition to social media posts, many voters have reported receiving text messages and emails that are nothing but disinformation campaigns about the presidential candidates and what they stand for. Many researchers feel that people are more ready to accept information that arrives directly on their phones than on social media. The number of such messages and emails has been observed to be significantly higher in the swing states, designed with the objective of influencing undecided voters.
What does all this mean for democracies the world over? How do nations regulate content on social media? Given the reach of social media, even if content is blocked or removed, in most cases the damage will already have been done. There have been many debates and suggestions on regulating content on platforms like Facebook, Twitter and Google to ensure that they are not biased and unfair. In the US, Section 230 of the Communications Decency Act protects social media companies from liability for content posted by users, but it also allows them to shape political discourse. There have been demands to repeal Section 230 and make the platforms responsible for content, since they already regulate user-shared content selectively and control what users see.
This brings us to the classic debate over whether social media platforms are actually platforms or publishers. Technically, a platform is a company or technology that enables communication and the sharing of information among users. A telecom company is a typical example of a platform: whatever you share is transmitted as is, without the telecom company censoring or filtering any content. A publisher, by contrast, is an entity that curates, edits and then shares content. Newspapers and TV channels are typical examples of publishers, and publishers are responsible for whatever they share. Today, internet companies like Facebook and Twitter are not only moderating and editing content but also controlling how content is consumed: both Facebook and Twitter have algorithms that determine what shows up on our feeds and timelines. Thus, they are increasingly becoming more publishers than platforms. And once they are recognised as publishers, they will need to conform to the laws and regulations that apply to publishers.
Facebook claims that it has community standards for regulating content in order to ensure the authenticity, safety, privacy and dignity of individuals, but these community standards have often been critiqued for being biased in their enforcement. Twitter recently rolled out fact-check labels, with which it flags tweets that its checkers feel are in violation of its truth policy. Such tweets carry a line below advising users to verify the post, which Twitter feels could be false or harmful. This, too, has been observed to be applied selectively, and many in the United States feel that Twitter is biased against the Republicans. Both Facebook and Twitter use AI tools to filter content and flag hate speech, but the inherent bias in those tools, whether deliberate or not, raises many moral and legal questions.
While all businesses are affected by this, the implications for democracies are far greater. What is happening in the United States today will happen in India tomorrow. India already has more than 350 million users on Facebook. With the expansion of the Internet, and most of these platforms going vernacular, the impact will be huge. Guidelines and rules on how countries like India can ensure that these platforms remain fair and unbiased need to be laid down and enforced.
The writer is CEO MyGov. Views are personal