Elon Musk-owned X, formerly known as Twitter, has the highest ratio of disinformation posts among large social media platforms, according to a new study by the European Commission, the European Union’s executive arm. The study examined more than 6,000 unique social media posts across Facebook, Instagram, LinkedIn, TikTok, X, and YouTube in three countries: Poland, Slovakia and Spain.
The study prompted the EU’s Values and Transparency Commissioner, Vera Jourova, to urge Musk to comply with the bloc’s laws aimed at combating disinformation.
“My message for (X) is: you have to comply with the hard law. We’ll be watching what you’re doing,” she said.
This isn’t the first time that Musk has come under fire for failing to curb the increase in disinformation and hate speech on X. Several previous studies have found that since the billionaire’s takeover in October 2022, the platform has become less safe for its users. Why has this happened? What do these studies say? We explain.
The surge in disinformation and hate speech
In just 12 hours after Musk acquired X, there was a 500% increase in the use of the N-word, according to an analysis published by the Brookings Institution. Within the following week, the use of the word “Jew” had spiked fivefold in comparison to before.
“Tweets with the most engagement were overtly antisemitic. Likewise, there has also been an uptick in misogynistic and transphobic language. This surge in hateful language has been attributed to various trolling campaigns on sites like 4chan and the pro-Trump forum ‘The Donald’,” the analysis added.
Things kept worsening in the subsequent months. In March, the Center for Countering Digital Hate (CCDH), a hate speech watchdog, released a study finding that after the ownership transfer, the platform had seen an uptick in hateful narratives, especially those targeting the LGBTQ+ community.
“This isn’t an accident. Elon Musk put up the ‘Bat Signal’ to homophobes, transphobes, racists and all manner of disinformation actors, encouraging them to flood onto Twitter. Not only has Musk’s ownership of the platform coincided with an explosion of the hateful ‘grooming’ narrative, but Twitter is monetising hate at an unprecedented rate,” Imran Ahmed, CEO of the CCDH, said.
About two months later, the organisation published another analysis revealing that X had failed to take action against 99% of hate posts by Twitter Blue subscribers, “suggesting that the platform is allowing them to break its rules with impunity and is even algorithmically boosting their toxic tweets.”
In response, X sued the CCDH in August, claiming that the organisation’s findings had harmed the company’s business as they “encouraged advertisers to pause investment”.
What’s behind the jump in hate speech and disinformation
Soon after Musk took over X, he made sweeping changes within the company that ultimately eroded safety standards on the platform.
For instance, the entrepreneur laid off about 50% of X’s employees, including several longtime executives. Notably, among them was Vijaya Gadde, the head of legal, policy, and trust and safety, who had played a crucial role in banning former US President Donald Trump from the platform.
“As a result, the team that was previously in place to monitor and censure hate speech is no longer at Twitter,” the Brookings Institution analysis noted.
Moreover, Musk restored previously banned accounts, such as Trump’s and those that identified with the Islamic State, which also contributed to the rise in hate speech.
The move was followed by a dilution of the verification process. Musk offered the once-exclusive blue tick for a small monthly fee, doing away with the traditional merit-based process that had rewarded users based on their number of followers and prominence in a particular field.
The Brookings Institution noted that the new verification process failed miserably, as users created fake accounts impersonating companies and political leaders.