Opinion | In the Palestine-Israel conflict, guess which side Big Tech privileges?
Amid conflict, it is unlikely that self-interested technology platforms will radically alter power asymmetries, even if they are willing to try. Yet greater scrutiny is required, and they should be held to account.
Buildings destroyed in the Israeli bombardment on al-Zahra, on the outskirts of Gaza City. (Photo: AP)

Extraordinary events can offer deep insights into how certain systems and societies function. This is as true of the current conflict in the Gaza Strip as it was of the escalation of the Russian invasion of Ukraine in 2022 or of Covid-19. In the context of the information ecosystem, such events expose both longstanding and emerging frailties. Even in relatively stable times, a confluence of issues — market concentration with “winner takes most” dynamics, skewed incentives that often reward novelty, speed, conflict and “engagement” over accuracy and reason, low levels of information literacy, and a high prevalence of partisan biases in societies — means that both the seeking and receiving of information, and the infrastructures that provide it, appear to be in a state of dysfunction. Many of these issues apply not just to social media platforms and messaging applications but to what we refer to as the traditional or mainstream media as well. The two do not operate in isolation from each other, so it is important to recognise the challenges both face, which are exacerbated in situations like the ongoing turmoil in West Asia.
Many of these frailties are self-inflicted. In March 2023, Kate Klonick, a fellow at the Yale Information Society Project, described the shrinking budgets and appetite for content moderation and research at many social media platforms as the end of an era of accountability. Over the last decade or so, Twitter, imperfect as it always was, served as an important source of information on events unfolding in real time. Because of its willingness to provide data via APIs, it was also valuable from a research point of view. But now, under its new avatar “X”, it has dismantled many of the features that made it useful in such contexts. Its verification programme was not without flaws, but it still signalled some degree of authenticity. This matters because the 2021 and 2023 editions of the Reuters Institute’s Digital News Report suggest that people increasingly get their news and information from individual personalities. Twitter’s APIs that allow meaningful research are now behind a paywall that may cost as much as $40,000 a month, making them prohibitively expensive for most researchers.
Still, researchers at the University of Washington’s Center for an Informed Public (UW CIP) found that seven accounts, which they described as the “new elites”, received more than 1.6 billion cumulative views over three days of posts about the conflict. These accounts often used emotionally framed videos and images and received significantly more views than traditional media organisations, which often had far larger follower counts. Many experts and journalists who have relied on the platform formerly known as Twitter have expressed similar opinions. Such concerns have even prompted the European Commission to send its first “request for information” under the Digital Services Act.
While Meta has attempted to detail the steps it has taken through a blog post and insisted that it applies its policies equally, concerns persist. A particularly egregious “glitch”, a term the company uses to explain away such instances without much follow-up information, resulted in some combinations of the word “Palestinians” and an emoji being translated as “Palestinian terrorists” on user profiles. Another “bug”, which affected all “Stories” globally, also impaired the ability to reshare posts about the situation in Gaza. Reportage in The Wall Street Journal suggests there is internal friction within the organisation over some of its enforcement decisions. More generally, many people around the world have claimed that posts expressing solidarity with Palestinians are being “shadowbanned” (having their visibility reduced) on Instagram, YouTube and TikTok.
Due to prioritisation and capability gaps, content moderation efforts generally struggle with issues that require local context, nuance and high-effort verification, such as images, videos or information shared out of context or in a false context. This is significantly harder in conflict and disaster scenarios, where reliable information is scarce and people tend to actively seek it out, engaging in what scientist Kate Starbird refers to as “collective sense-making”. The steady improvement of generative AI models offers an additional resource to people or entities looking to exploit this gap, and presents two challenges.
First, the effort required to debunk fabricated or manipulated content; and second, the possibility of genuine evidence being perpetually contested, even more than it already is.
At a time when multilateral institutions are unable or unwilling to speak unambiguously about the continued loss of civilian lives in a geopolitical conflict several decades in the making, and about threats to vulnerable populations across the world more generally, it is unlikely that self-interested technology platforms will radically alter power asymmetries, even if they are willing to try. Yet greater scrutiny is required when they wittingly or unwittingly reinforce such asymmetries. The operation of Internet Referral Units, where platforms act against posts flagged by a unit appointed or controlled by the executive, is one such example. Instances where a platform’s operations or government relations in a country are overseen by individuals who have been closely associated with, or have worked for, politicians currently in government are another.
The combination of market concentration, skewed incentives, low information literacy and the prevalence of partisan bias and social fissures in societies deeply affects social media platforms and the broader media landscape. These challenges cannot be wished away by “jawboning” private entities over market access to a country or regional bloc, nor by ham-fisted regulation that mainly serves to increase executive control and discretion. If societies around the world want to mitigate these effects in “war time”, they need “all time” investments in understanding the impact of information flows and in addressing the root causes of several complex issues. In addition, societies must find ways to stop actively rewarding their political class and its surrogates for polluting the information ecosystem.
The writer is Policy Director at the Internet Freedom Foundation