Over the years, Twitter has often been called out for misinformation spread by its users, some of whom are politicians and public figures with wide audiences. Earlier this month, Twitter announced plans for dealing with deep fakes, or manipulated video.
Now, news has emerged that Twitter is testing out a new way to combat misinformation in tweets. This was first reported last week by NBC News, which attributed its information to a leaked demo of a new feature.
What is this new feature?
Tweets containing potential misinformation or lies will start showing an orange or red label with the tag ‘Harmfully Misleading’ beneath them, according to screenshots shared by NBC News. The message beneath the label reads, “Twitter Community reports have identified this tweet as violating the Community Policy on Harmfully Misleading information. The tweet’s visibility will be reduced.”
How far can this stop the spread of misinformation?
The red and orange labels will be hard to miss. At the very least, these will warn users about misinformation. More importantly, Twitter will reduce the reach of the particular tweet, which means it will show up on fewer timelines and have much less visibility.
Twitter will also highlight tweets that point out the false claims being made in a misleading tweet. It is looking to encourage community members to write “Notes” and provide “critical context” to earn points, and will rely on this community-based feedback system to remove misinformation.
This reliance on community reports can, however, be problematic. It opens up the possibility of ill-judged reporting, or of individual or ideological biases coming into play. For instance, a historian could tweet a view that does not sit well with a certain section of people and have it tagged as misleading, even if that were not really the case. At the moment, it is not clear how Twitter will assess these ‘community’ reports, given that the context will vary from one country to another.
Twitter will reportedly give out a “community badge” to those who “contribute in good faith and act like a good neighbour”. This again raises questions about what exactly the criteria will be for determining who gets designated a “good neighbour”. Verified fact-checkers and journalists are expected to get preference when calling out tweets with misleading information.
What is Twitter’s plan for deep fakes?
Deep fakes, which Twitter calls “synthetic or manipulated media”, are fake videos, images or audio created using complex AI tools, and are difficult to identify as such. Deep fakes can be used to attribute false speech to a politician. They are also used to create pornographic videos.
Twitter revealed that where it fears an image has been “significantly and deceptively altered or fabricated”, it will provide additional context by applying a label. A tweet with deep fake media would also appear with a warning before users retweet or like it. Twitter also plans to reduce the visibility of such tweets and prevent them from being recommended.
If Twitter assesses that a deep fake is likely to cause harm, it will remove the tweet. Harm here would include threats to a person or group, incitement of mass violence, or violation of a particular user’s privacy. The labelling of tweets containing deep fakes will start on March 5.