Germany’s Parliament last week passed a landmark law to hold Internet companies accountable for illegal, racist, or slanderous material on their social media platforms, requiring them to remove such content within a specified timeframe or face fines of up to €50 million. The law, perhaps the toughest of its kind in the democratic world, upends the philosophy underlying American legislation that shields tech firms from such responsibility. The companies and free speech advocates have expressed concern over the potential threat to legitimate freedom of expression; German Justice Minister Heiko Maas, the driving force behind the law, has, however, declared that “freedom of speech ends where the criminal law begins”.
What are the implications of such legislation in an interconnected digital world in which Facebook, Twitter, WhatsApp and Reddit are ubiquitous in nearly every country, each of which has its own set of laws and thresholds of social and legal acceptability in public conversation? Where does India stand on the regulation of such content at a time when its society in general, and social media in particular, is polarised on several issues?
What is the broad thrust of the law passed by the Bundestag on June 30 to regulate content on social media?
The Network Enforcement Act, commonly referred to in the media as the “Facebook Law”, requires social media companies operating in Germany, such as Facebook and Twitter, to delete or block any kind of hate speech, and racist or slanderous comments or posts, that are “obviously illegal” within 24 hours of their being reported by users. In the case of content that is flagged as offensive but does not clearly amount to defamation or incitement to violence, the companies have up to seven days to act. Persistent failure to delete illegal content will attract fines ranging from €5 million to €50 million.
Under the law, the social network has to inform the complainant how it handled the case; failure to do so could result in an additional fine of €5 million on the company’s chief representative in Germany. Companies will have to file public reports every six months on the number of complaints received, and how they have been addressed. In a first, they will also be required to reveal the identity of the user accused of defamation or of violating other people’s privacy. So far, Facebook and Twitter have refrained from revealing user identity, even when asked to take down content in other countries.
Any social network with more than two million users will have to create a process for addressing complaints. While Facebook and Twitter already have mechanisms for reporting abuse, smaller, upcoming networks must quickly fall in line. Email and messenger providers, included in an earlier draft, have been excluded from the final law.
Companies have the option of review by an independent third party, a process that will be overseen by Germany’s federal Justice Department. The law comes into effect in October, after Germany’s national elections.
Why was such a law deemed necessary?
Even before the enactment of this law, Germany was among the most aggressive Western democracies in forcing Facebook, Google and Twitter to clamp down on hate speech and extremist messaging. And yet, a study this year found that Facebook and Twitter had failed to meet a target set by the authorities of removing 70% of online hate speech within 24 hours of being alerted, The New York Times reported. Facebook was able to remove 39%, and Twitter met the deadline in just 1% of cases. YouTube removed 90% of flagged content within a day of being notified.
Chancellor Angela Merkel’s government has been worried about the explosion of racist abuse and anti-immigrant posts since the arrival, from 2015, of more than a million migrants, predominantly from war-torn Muslim countries. Nazi symbols and Holocaust denial are illegal in Germany, and the country has some of the Western world’s most stringent anti-hate speech laws.
Justice Minister Heiko Maas, who piloted the legislation, said its intention was to make the rules that apply in the real world, equally enforceable in the digital world. “With this law, we put an end to the verbal law of the jungle on the Internet and protect the freedom of expression for all. We are ensuring that everyone can express their opinion freely, without being insulted or threatened. That is not a limitation, but a prerequisite for freedom of expression,” Maas said.
“Freedom of opinion ends where criminal law begins… Calls to commit murder, threats, insults, incitement to hatred or the Auschwitz lie (that Nazi death camps didn’t exist) aren’t expressions of freedom of opinion but attacks on the freedom of opinion of others,” he said.
What has been the reaction to this law?
It has caused outrage among free speech advocates, and triggered considerable debate over what it means for freedom of expression on the Internet. Facebook has protested: “We have been working hard on this problem… We believe the best solutions will be found when government, civil society and industry work together…” Facebook, which has over 29 million users in Germany and had said in May that it would hire 3,000 more people to deal with abusive material, also complained that it was not consulted enough.
At the same time, the Central Council of Jews in Germany hailed the law as “the logical next step for effectively tackling hate speech since all voluntary agreements with the platform providers have been virtually unsuccessful”. Germany might set a precedent on tackling hate speech with the new law, putting social networks under pressure that could extend beyond German borders. However, issues of definition, and of where to draw the line, will continue to present complex problems: indeed, some of the tweets sent out by US President Donald Trump are, in the opinion of many, abusive and inflammatory.
How does the situation in Germany compare with that in India?
Abuse and hate speech are a problem everywhere, including in India. The now-scrapped Section 66A of the Information Technology Act, 2000, made it a criminal offence to share content that was “grossly offensive or has menacing character”, or information that was known to be “false” but had been shared persistently to cause “annoyance, inconvenience, danger, obstruction, insult, injury, criminal intimidation, enmity, hatred or ill will”. A conviction was punishable with up to three years in jail and a fine.
Repeated misuse of the section to harass and intimidate, however, provoked the Supreme Court to declare Section 66A unconstitutional in March 2015. But even when Section 66A was in force, it was the individual who was under scrutiny, not the social network; the German law, by contrast, holds the network itself accountable.
The Supreme Court had, however, upheld Section 69A, which gives law enforcement agencies the “power to issue directions for blocking for public access of any information through any computer resource”. But the court “read down” the section, saying the intermediary would need a court or government order to pull down content. Both these sections were added to the IT Act in 2009 by the UPA government. Law enforcement agencies also invoke sections of the IPC to take down content.
Facebook’s data show that it restricted 719 pieces of content in India between July and December 2016. Facebook says this content violated laws against “anti-religious speech, hate speech, and disrespect of national symbols”. Facebook has over 200 million users in India, making filtering of the kind demanded by the German law difficult. Also, as has been pointed out, many online abusers have social media links with the very lawmakers who would have to pass an anti-abuse law, should India choose to go down that road.
(With inputs from AP, Reuters, NYT)