The term “white nationalism” is often a euphemism for white supremacist politics, one dating back to the civil rights era, when even many white Americans turned in repugnance against violent white extremists. And yet, even after 2017, when a white supremacist killed a woman protesting against a racist rally in Charlottesville, Facebook’s training manuals drew a vague line between supremacism and its gentrified fictions. White is a hue, not a national identity, and the idea of a white nation is illogical, yet Facebook treated white nationalism as permissible ethnic pride. Following the mosque killings in New Zealand, however, Facebook has erased the distinction and will begin scrubbing its platforms of such hate speech next week.
But this can only be regarded as a statement of intent, because the social media giant has not yet issued fresh guidelines for screening content. That will be a difficult project, since social media platforms are in a double bind. They have scarcely been committed foes of bigotry; otherwise, India would not have seen lynchings emboldened by WhatsApp rumours, or threats of physical harm on Twitter left standing. At the same time, when platforms do take down content, they can be accused of censorship by interest groups. Complete transparency in the framing of guidelines and their enforcement would reduce this problem. Platforms need to take users on board, and not only experts on hate speech, such as the ones Facebook is currently consulting.
Besides, race can be only one facet of a general policy on hate speech, because context matters. In Europe, immigrants are perceived to be the problem. But the bigoted in Britain are worried about “Pakis”, a catch-all term for South Asians that is agnostic to religion. In France, the concern is about clothing identified as Muslim. In the US, colour is the principal issue, even after half a century of state-mandated integration. And there are complications, as in New Zealand, where the Christchurch killings were inspired by US white supremacist traditions, yet the targets were Muslim immigrants from Asia. Such improbable situations cannot be anticipated by a central censor based in Menlo Park. Technology leaders like Facebook could respond with technical solutions, using analytics and artificial intelligence to narrow the field. But strategies that do not involve the public, and do not respond swiftly to local complaints, cannot contain the menace of hate speech online.