Social media platforms should be more responsive on harassment, says report

A new report on online harassment calls for greater transparency and responsiveness in the content moderation processes of social media platforms

By: Tech Desk | Updated: November 23, 2016 1:44 pm

A new report on online harassment has called for greater transparency and responsiveness in the content moderation processes of social media platforms, as well as more capacity building for law enforcement agents to tackle this growing online menace. The report, ‘Online Harassment: A Form of Censorship’, by the Delhi-based not-for-profit legal services organisation SFLC.IN, says efforts should be made to educate people about existing mechanisms for combating online harassment.

Mishi Choudhary, Executive Director at SFLC.IN, said the organisation has been studying online harassment as a form of censorship that forces people out of participation in online policy discourse. “This report’s goal is to explore the phenomenon in detail and document how people experience the effects of harassment in their lives, as we work towards finding workable and understandable ways to address the problem.”

The report, supported by the New York-based Software Freedom Law Center and the think tank Jigsaw, features dialogues with key stakeholders, including social media platforms and 18 prominent individuals involved in the debate around online hate speech and harassment. One of them, MP Baijayant Panda, said there is a need for adequate legal protections against online harassment. “However, this should not be seen under any circumstance as an endorsement of draconian laws like the now-repealed Section 66A of the Information Technology Act, 2000 (IT Act), which lent itself to wanton abuse due to its over-broad and ambiguous language,” he added.

BJP IT cell head Arvind Gupta said online platforms at present suffer from a lack of trust when it comes to guaranteeing users’ safety, and the significant levels of human intervention in content moderation further dilute this trust, as such intervention introduces personal biases. “Platforms need to build a system of trust and non-partisanship by heeding user feedback and implementing broad-based changes to their content moderation practices on the basis of this feedback,” added Gupta, one of the interviewees.

The report suggests the following safeguards for social media users against online harassment and abuse:

Thoroughly screen the personal information you share online

Consider dedicating an email ID to social media use

To protect your identity, avoid uploading photos that identify you along with your location

Use a pseudonym if anonymity is relevant to your online activities

Keep tabs on information others post about you to ensure no personally identifiable information reaches unwanted hands

Run Internet searches on yourself to monitor unauthorized information appearing online

Use stronger passwords, and review your service providers’ privacy policies

It recommends the following steps to be taken in cases where users find themselves at the receiving end of targeted online harassment campaigns:

Report incidents to the concerned service providers

Block the perpetrators when they are limited in number

Approach law enforcement as a last resort, when there are real threats to physical safety

Seek help from social media influencers

Record all communications with perpetrators, service providers and law enforcement

Seek support from friends and family

It suggests that platforms like Facebook and Twitter follow a set of best practices to limit online harassment:

1. Have in place rules that prohibit hateful, disparaging, and harassing content on intermediary networks; rules must be clearly articulated and designed for easy consumption; include illustrative examples for each category of prohibited content

2. Generate awareness within the user community about prohibited content; notification systems, promotional banners etc. could be leveraged for this purpose

3. Enable easy and accurate reportage by users and third parties; include easily identifiable “report” buttons; provide adequate opportunities to substantiate why content must be removed

4. Have clearly defined review processes prescribing (where possible) objective standards for determining permissibility; refer to applicable national laws

5. Deploy dedicated teams to review and disable content; provide periodic training to review teams on efficient identification and disablement

6. Review reports and disable content within a prescribed time frame (24/48/72 hours)

7. Provide opportunities to creators of disabled content to justify themselves; include provisions for timely restoration of disabled content and reinstatement of terminated accounts

8. Share best practices within the stakeholder community; contribute to building effective multi-stakeholder norms for tackling prohibited content

9. Liaise with law enforcement; aid in investigation of reported offenses in consonance with established legal procedures

10. Work with other stakeholder communities; engage with civil society organizations and academia on awareness generation; conduct trainings/workshops for law enforcement officials on reportage mechanisms so as to facilitate effective handling of complaints

11. Promote counter-speech; invite counter narratives from public figures; offer incentives; conceptualize additional means to promote counter-speech