Twitter said today that it is cracking down on mean, hateful or menacing tweets that cross the red line from free speech into abuse.
Twitter is overhauling its safety policy and beefing up the team responsible for enforcing it, along with investing “heavily” in ways to detect and limit the reach of abusive content, general counsel Vijaya Gadde said in a column published by The Washington Post.
“We need to do a better job combating abuse without chilling or silencing speech,” Gadde said.
Twitter last month modified its rules to ban ‘revenge porn’ — the tweeting of intimate or revealing pictures or video of people without their permission.
The company is also taking steps to curtail the use of anonymously created Twitter accounts to intimidate or silence targeted people.
“We are changing our approach to this problem, in some ways that won’t be readily apparent and in others that will be,” Gadde said.
Twitter has tripled the size of the team responsible for protecting users of the service, resulting in a five-fold increase in the speed of response to complaints, according to the general counsel.
“We are also overhauling our safety policies to give our teams a better framework from which to protect vulnerable users,” Gadde said.
The changes include expanding the definition of banned “abuse” to cover indirect threats of violence.
“As some of our users have unfortunately experienced firsthand, certain types of abuse on our platform have gone unchecked because our policies and product have not appropriately recognized the scope and extent of harm inflicted by abusive behavior,” Gadde said.
“Even when we have recognized that harassment is taking place, our response times have been inexcusably slow and the substance of our responses too meager. This is, to put it mildly, not good enough.”
Facebook last month updated its “community standards” guidelines, giving users more clarity on acceptable posts relating to nudity, violence, hate speech and other contentious topics.
Facebook-owned smartphone photo and video sharing service Instagram followed suit today with a similar overhaul of its rules about what is deemed unacceptable.