“Meta’s actions in May 2021 appear to have had an adverse human rights impact (in Palestine).” The report commissioned by Meta last year on how its policies harmed the rights of Palestinian Instagram and Facebook users during the attacks on Gaza in 2021 was damning. It also found “a lack of oversight at Meta that allowed content policy errors with significant consequences to occur”. In 2023, though, it seems that the bias in the algorithm persists – it’s now AI-powered.
An investigation by The Guardian has found that a new feature on WhatsApp (which, like Facebook and Instagram, is owned by Meta) that generates images in response to queries seems to promote anti-Palestinian bias, if not outright bigotry. Searches for “Palestinian” and “Palestinian boy” returned images of children holding guns. In contrast, a search for “Israeli boy” showed children playing sports or smiling, and even “Israeli army” showed jolly, pious and unarmed people in uniform. The controversy around the AI-generated stickers has not occurred in a vacuum. Meta’s social media platforms have been accused of being biased against content from, and in support of, Palestinians.
For some time now, considerable work has been done on bias in artificial intelligence and machine learning (ML) models. Since the programs are amoral, they can reflect, and perhaps even amplify, the prejudices in the data used to train them. Addressing the prejudices in the machine, then, requires active interventions and even regulation. This is easier said than done: large AI and ML models can deploy even the smallest bias on a massive canvas. Governments and regulators have their own political ends, and self-regulation by Big Tech remains something of a chimaera. The ethical thing to do, though, is clear. No search should paint people, especially children from an entire community, as inherently violent. For all its uses for human beings, AI should not be used to dehumanise so many of them.