Updated: February 23, 2020 2:27:45 pm
Twitter is testing a new way to combat misinformation on the platform. It will start flagging suspect tweets with a coloured warning label saying they contain misinformation or lies. The feature was first reported by NBC News and is reportedly still in testing. Twitter has not officially confirmed when the labels will start rolling out.
We explain why Twitter is taking such steps.
What is this new feature that Twitter plans to roll out?
Misinformation is a serious problem on Twitter and one for which the company has often been called out. It appears the platform will finally start calling out lies and misinformation for what they are, and in effect directly warn users.
Tweets which have potential misinformation or lies will start showing an orange or red coloured label with the ‘Harmfully Misleading’ tag beneath them, according to the screenshots shared by NBC News. The message beneath the label reads, “Twitter Community reports have identified this tweet as violating the Community Policy on Harmfully Misleading information. The tweet’s visibility will be reduced.”
It looks like Twitter will apply this to tweets by politicians and public figures, whose accounts posting misinformation or lies has long been a problem on the platform.
The leaked demo features bright red and orange badges for tweets that have been deemed “harmfully misleading,” in nearly the same size as the tweet itself and prominently displayed directly below the tweet that contains the harmful misinformation. https://t.co/TciYv430l6 pic.twitter.com/xafDO29e8M
— NBC News (@NBCNews) February 20, 2020
So how will this stop the spread of misinformation?
As the screenshots show, the feature will at the very least tag a tweet that contains misinformation or some form of lies and warn users. The red and orange labels will be hard to ignore on the platform. Twitter will also highlight tweets that point out the false claims being made in the misleading tweet.
More importantly, Twitter will reduce the reach of the particular tweet, which means it will show up on fewer timelines and will have much less visibility. Twitter is looking to encourage community members to write “Notes” and provide “critical context” to earn points. It will rely on a community-based feedback system to remove misinformation from the platform.
Is this really the best solution?
Obviously, the solution proposed is not perfect. For one, Twitter is relying on ‘Community Reports’ to highlight that a tweet has misinformation. This can be problematic because Twitter will be dependent on the wisdom of the crowds in some sense to ensure that misinformation is tagged. And there is plenty of evidence to show that on Twitter, trends can be gamed and ‘wisdom’ is lacking.
Twitter will supposedly give out a “community badge” to those who “contribute in good faith and act like a good neighbour,” which again raises questions about the criteria for deciding who gets designated a ‘good neighbour’. Verified fact-checkers and journalists are expected to get preference when Twitter showcases tweets that call out the misleading information.
However, individual and ideological biases exist, and these could very clearly influence how a tweet gets tagged as ‘misleading.’ For instance, a historian could tweet a view that is unfavourable to a certain section of people and have it tagged as misleading, even if it is accurate.
At the moment, it is not clear how Twitter will assess and handle these ‘community’ reports, given that the context will vary from country to country. Hopefully, the company will reveal clear guidelines when it releases the feature officially.
What about deep fakes on Twitter?
Earlier this month, Twitter announced plans to curb deep fakes by labeling them as such and reducing their reach. Twitter refers to these as “synthetic or manipulated media”. Deep fakes are videos, images or audio fabricated using complex AI tools, and they are not as easy to spot as lower-quality manipulations.
Deep fakes are much more sophisticated and can be used to attribute false speech to a politician. They are also used to create pornographic videos.
Twitter had revealed that where it fears an image has been “significantly and deceptively altered or fabricated”, it will provide additional context by applying a label. A tweet with deep fake media would also show a warning before users retweet or like it. Twitter also plans to reduce the visibility of such tweets and prevent them from being recommended.
Twitter would keep three factors in mind when scanning and labeling a tweet as a deep fake. First, it would look at “whether the content has been substantially edited,” including the addition or removal of any kind of visual or audio information. Second, it would consider the context in which the media was being shared and whether there was “a deliberate intent to deceive people”.
Finally, the platform will remove the tweet if it is likely to cause harm. This would include threats to a person or group, risks of mass violence, or violations of a particular user’s privacy. The labeling of tweets will start from March 5 this year.
📣 The Indian Express is now on Telegram. Click here to join our channel (@indianexpress) and stay updated with the latest headlines