By Damien Cave
The video showing the murder of 51 people in Christchurch carries both an offensive title, “New Zealand Video Game,” and a message to “download and save.”
Appearing on 153news.net, an obscure site awash in conspiracy theories, it is exactly the sort of online content that Australia’s new law criminalizing “abhorrent violent material” says must be purged. But that doesn’t mean it’s been easy to get it off the internet.
“Christchurch is a hoax,” the site’s owners replied after investigators emailed them in May. Eventually, they agreed to block access to the entire site but only in Australia.
A defiant response, a partial victory: Such is the challenge of trying to create a safer internet, link by link.
In an era when mass shootings are livestreamed, denied by online conspiracy theorists and encouraged by racist manifestoes posted to internet message boards, much of the world is grasping for ways to stem the loathsome tide.
Australia, spurred to act in April after one of its citizens was charged in the Christchurch attacks, has gone further than almost any other country.
The government is now using the threat of fines and jail time to pressure platforms like Facebook to be more responsible, and it is moving to identify and block entire websites that hold even a single piece of illegal content.
“We are doing everything we can to deny terrorists the opportunity to glorify their crimes,” Prime Minister Scott Morrison said at the recent Group of 7 summit in France.
But will it be enough? The video of the Christchurch attack highlights the immensity of the challenge.
Hundreds of versions of footage filmed by the gunman spread online soon after the March 15 attack, and even now, clips, stills and the full livestream can be found on scores of websites and some of the major internet platforms.
The video from 153news alone has reached more than 6 million people on social media.
Australia is pitching its strategy as a model for dealing with the problem, but the limits to its approach have quickly become clear.
Although penalties are severe, enforcement is largely passive and reactive, relying on complaints from internet users, which so far have been just a trickle. Resources are scarce. And experts in online speech say the law lacks the transparency that must accompany any effort to restrict expression online.
Of the 30 or so complaints tied to violent crime, terrorism or torture received so far, investigators said, only five have led to notices against site owners and hosts.
“The Australian government wanted to send a message to the social media companies, but also to the public, that it was doing something,” said Evelyn Douek, an Australian doctoral candidate at Harvard Law School who studies online speech regulation. “The point wasn’t so much how the law would work in practice. They didn’t think that through.”
A Hierarchy of Harmful Content
The heart of Australia’s effort sits in an office near Sydney’s harbor that houses the eSafety Commission, led by Julie Inman Grant, an exuberant American with tech industry experience who describes her mission as online consumer protection.
Before the law passed, the commission handled complaints about other online harms, from cyberbullying to child sexual exploitation. But while the commission’s mandate has grown, its capacity has not. It has just 50 full-time employees and a budget of $17 million for this fiscal year.
Lawmakers have said they will consider increasing resources, but at the moment, the team enforcing the law consists of only seven investigators.
Inside a room with frosted windows and a foosball table, the team reviews complaints. Most of the flagged content falls outside the law's scope: violence from war, or what investigators describe as versions of a naked toddler being bitten in the groin by a chicken.
“There are a lot of things we can’t do anything about,” said Melissa Hickson, a senior investigator.
Experts say that is the problem with relying on complaints, which is what social media platforms like Facebook and Twitter do as well. Enforcement can be haphazard.
A better model, some argue, is evolving in France, where officials have said they want to force internet services to design risk-reduction systems, with auditors making sure they work. It’s similar to how banks are regulated.
Australia’s new law takes an approach more in line with the way the world fights child pornography, with harsh penalties and investigations led by the same team that handles images of child sexual exploitation.
Worldwide, after decades of evolution, that system is robust. Software called PhotoDNA and an Interpol database rapidly identify illegal images. Takedown notices can be deployed through the INHOPE network — a collaboration of nonprofits and law enforcement agencies in 41 countries, including the United States.
In the last fiscal year, the commission's Cyber Report team, the unit that fields such complaints, requested the removal of 35,000 images and videos through INHOPE, and in most cases, takedowns occurred within 72 hours.
“I think we can learn a lot from that,” said Toby Dagg, 43, a former New South Wales detective who oversees the team.
Experts agree, with caveats. Child exploitation is a consensus target, they note. There is far less agreement about what crosses the line when violence and politics are fused. Critics of the Australian law say it gives internet companies too much power to choose what content should be taken down, without having to disclose their decisions.
They argue that the law encourages platforms and hosting services to censor material preemptively: they face steep penalties for any "abhorrent violent material" they host, even if they were unaware of it, and even if they take down the version identified in a complaint while other copies remain.
Dagg acknowledged the challenge. He emphasized that the new law criminalizes only violent video or audio that is produced by perpetrators or accomplices.
But there are still tough questions. Does video of a beheading by uniformed officers become illegal when it moves from the YouTube channel of a human-rights activist to a website dedicated to gore?
“Context matters,” Dagg said. “No one is pretending it’s not extremely complicated.”
Calls for Transparency and Collaboration
Immediately after the Christchurch shootings, internet service providers in Australia and New Zealand voluntarily blocked more than 40 websites — including hate hothouses like 4chan — that had hosted video of the attacks or a manifesto attributed to the gunman.
In New Zealand, where Prime Minister Jacinda Ardern is leading an international effort to combat internet hate, the sites gradually returned. But in Australia, the sites have stayed down.
Morrison, at the G-7, said the eSafety Commission was now empowered to tell internet service providers when to block entire sites at the domain level.
In its first act with such powers, the commission announced Monday that around 35 sites had been cleared for revival, while eight unidentified repeat offenders would continue to be inaccessible in Australia.
In a country without a First Amendment and with a deep culture of secrecy in government, there is no public list of sites that were blocked, no explanations and no publicly available descriptions of what is being removed under the abhorrent-content law.
Officials have promised more transparency in a recent report, and some social media companies have pledged to be more forthcoming. But Susan Benesch, a Harvard professor who studies violent rhetoric, said any effort that limits speech must require clear and regular disclosure "to provoke public debate about where the line should be."
To get a sense of how specific complaints are handled, in early August a reporter for The New York Times submitted three links for investigation:
— A Facebook post showing a gun used in the Christchurch attacks.
— Footage of the Christchurch attacks found on a site based in Colombia.
— A message board post referring to the alleged Christchurch attacker as a saint.
Investigators said the last item “did not meet the threshold” and was not investigated. For the Christchurch footage, a notice was sent to the site and the hosting service. The first complaint was referred to Facebook, which removed the post.
Overall, the process was cautious, but its scope was defined entirely by whoever happened to report a problem.
Two of the five complaints that led to action by the Cyber Report team involved the beheading of Scandinavian tourists in Morocco by Islamic State supporters. One involved images from the murder of Bianca Devins, a 17-year-old girl from New York state, and the final pair involved the Christchurch attack footage — one of which was submitted by The Times.
Of the five, one site (153news) has blocked access, two sites or their hosting providers removed the material, and two sites have not yet responded.
Given that limited effect, the question Australia’s approach still can’t answer is whether governments that are eager to act can muster a more robust, transparent and careful form of internet cleanup.
“It’s tremendously important for humankind that we find ways of making and enforcing norms of behavior online,” Benesch said. “And companies have not been much help.”