Twitter has asked users to provide feedback on an upcoming policy that aims to address the use of ‘dehumanizing language’ online — a form of abuse that does not necessarily violate Twitter’s existing rules and policies.
In a blog post, the company stated that it will seek user feedback before going ahead with the new policy, which it says it has been working on for the last three months.
According to the post, Del Harvey, Twitter’s VP of Trust and Safety, said the service aims to address tweets that users have reported but that do not violate the platform’s rules and regulations. In this terminology, ‘dehumanising language’ refers to posts involving name-calling that compares people to animals or reduces them to body parts.
The blog post notes that examples of such language include comparing groups to animals and viruses (animalistic), or reducing groups to their genitalia (mechanistic). Twitter has sought feedback because it ‘is unable to find these posts, that do not violate our terms, but are deemed objectionable by certain users’.
Our hateful conduct policy is expanding to address dehumanizing language and how it can lead to real-world harm. The Twitter Rules should be easier to understand so we’re trying something new and asking you to be part of the development process. Read more and submit feedback.
— Twitter Safety (@TwitterSafety) September 25, 2018
Specifically, the new policy, shaped by user feedback, will target users who share content dehumanising people based on their association with certain groups — language that tends to promote violent behaviour.
The platform has also claimed that involving users in the feedback process will help them better understand the policy framework governing users and content.
The survey will be open to Twitter users until 6 pm IST on October 9, after which the platform will review the responses before updating its policy at the end of the year.
According to Twitter, the company plans to expand its hateful conduct policy to include “content that dehumanises others based on their membership in an identifiable group, even when the material does not include a direct target.”
The definition of an identifiable group, according to the blog post, is a “group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.”
Twitter has faced mounting criticism over its handling of abuse, and users have at times complained that hate speech goes unblocked because it does not violate the terms of service. The social network now appears set to expand its understanding of what constitutes abuse on the platform.