
The auctioning of Muslim women, first on Sulli Deals and now through Bulli Bai, is shocking, and it is our collective responsibility to make sure it never happens again. Finding those who created the app does not guarantee this. Creating a system to identify, report and block further iterations of this app does. It is worth remembering that even a good system of blocking this app from mainstream online platforms is a short-term technical solution. It may restrict the distribution of the app but will not address the serious society-wide problem that creates this unspeakable market for targeting Muslim women.
We have on our hands a problem of a few active bad actors and many passive ones. The code was written and shared by people actively looking to target not only the women listed but also Muslim women as a group. But others were involved: everyone who saw it and did nothing; everyone who takes so dehumanised a view of Muslim women that they see these auctions as a lark. If we are being honest, the passive bad actors also include their wider social circles: all the people who created this social norm of dehumanisation by enabling those who spoke of Muslim women in a derogatory fashion. We have so far fastened our attention on the active bad actors, and on GitHub, the platform through which they shared their repugnant tool.
So far, debates about harmful speech on online platforms have centred on the sharing of some form of human communication. For over a decade, Facebook, Twitter, WhatsApp and YouTube have been at the centre of controversy. Over time, they have developed elaborate systems through which harmful content can be identified and removed. These include systems through which you and I can report speech as harmful, and systems through which law enforcement can order the removal of illegal content. There are also exceptional systems, used for unambiguously and extremely harmful material like child abuse media, that ensure such content is proactively removed with as little delay as possible. What is key is that this content is usually legible to a wide range of people. Where communication takes place in languages alien to a platform’s content moderators, they struggle. GitHub’s content is code. The company is, however, working towards more sophisticated content moderation.
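To make these overlapping systems concrete, the sketch below models the three channels described above: user reports, law-enforcement orders, and proactive matches against known harmful material. It is a hypothetical illustration, not any platform’s actual interface; every name and label in it is assumed for the example.

```python
# Hypothetical sketch of the three removal channels described above.
# The source labels and actions are illustrations, not a real platform API.
from dataclasses import dataclass

@dataclass
class Report:
    source: str       # "user", "law_enforcement", or "proactive_match"
    content_id: str   # identifier of the reported post, image or video

def route(report: Report) -> str:
    """Send a report down the appropriate moderation path."""
    if report.source == "proactive_match":
        # Known, unambiguous harm (e.g. child abuse media): remove at once.
        return "remove_immediately"
    if report.source == "law_enforcement":
        # Verify that the order is valid, then remove the content.
        return "legal_review"
    # Ordinary user reports go to human moderators for assessment.
    return "moderation_queue"
```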
There is great value in a platform that shares code. If you do any kind of intellectual work, you will know the value of being able to draw on what others have learned and shared. Code, as we know, builds beautiful and terrible things. Malware is code. But the app on which I type this article and the app that plays me music as I write are also code. The question is how to ensure that recurring versions of Sulli Deals and Bulli Bai are not shared on an otherwise useful platform. Part of the trouble is that you and I cannot read code any more than most people born and raised in the United States can read Bangla. GitHub is a specialist platform that is not accessible or legible to everyone.
Most suggestions thus far focus on finding and punishing the perpetrators. This is difficult because the laws of the United States require companies not to share private information unless the request is made through an onerous process: the pre-internet mutual legal assistance process for law enforcement requests from other countries. After the internet made American platforms intermediaries of communication worldwide, the number of requests for information from these companies escalated dramatically. The system does not have the resources to cope with the increased demand, and there is a delay before requests can be processed. US-based companies cannot share information outside this system. This is why it is a waste of time to simply call for GitHub to hand over the names of the authors of the code. It is a path that the government can and should pursue, but it is not the only remedy. Arresting those who authored and shared this code is not going to prevent others from creating more like it. If there is a market for this vile app, and people whose social norms encourage its creation and use, it will materialise again.
Our focus in the short term should be on finding a way to ensure that any recurring versions of this code are blocked proactively by GitHub. This is the strategy used for non-consensual sexual media, or “revenge porn” as it is commonly called. There is no shortage of people who create it and people who consume it, and great harm takes place in the period before non-consensual sexual images are reported and removed. To address this, platforms maintain a shared database of digital fingerprints (“hashes”) of reported videos and images, which allows them to remove the content the instant it is re-published or shared. Bulli Bai is similar in the nature of the harm, as well as in the presence of a critical mass of bad actors. The photographs of high-profile Muslim women, however, are publicly available and are made harmful by the context in which these apps share them. At least in the short term, GitHub needs to work with the group being targeted towards an automated detection system that will curb this disturbing new trend in the targeting of Muslim women.
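For readers who want a sense of the mechanism, here is a minimal sketch of how such a shared-fingerprint blocklist works. It is an illustration under simplifying assumptions: real systems such as PhotoDNA or StopNCII use robust perceptual hashes that survive cropping and re-encoding, whereas the exact hash below catches only byte-identical copies, and every name in the sketch is hypothetical.

```python
# Minimal illustration of a shared-fingerprint blocklist.
# Hypothetical sketch: real systems use perceptual hashes so that lightly
# edited copies still match; SHA-256 here flags only identical re-uploads.
import hashlib

# Fingerprints of content that has already been reported and verified.
# In practice this database is shared across platforms and curated by humans.
known_bad_fingerprints: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Compute a fingerprint of an uploaded file."""
    return hashlib.sha256(content).hexdigest()

def report(content: bytes) -> None:
    """Add verified harmful content to the shared blocklist."""
    known_bad_fingerprints.add(fingerprint(content))

def screen_upload(content: bytes) -> str:
    """Check a new upload against the blocklist before it is published."""
    if fingerprint(content) in known_bad_fingerprints:
        return "block"    # proactive removal: the content never goes live
    return "publish"      # unknown content still relies on user reports
```

The design choice worth noting is that once one platform verifies and fingerprints a harmful file, every participating platform can block its re-upload without ever needing to see or store the file itself.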
This column first appeared in the print edition on February 18, 2022, under the title ‘The code against harm’. The writer is an affiliate of the Berkman Klein Center for Internet & Society at Harvard University.