Prime Minister Jacinda Ardern is continuing to speak out strongly against social media platforms, as it becomes increasingly clear the Christchurch terror attacks were conceived of the internet, for the internet.
As well as expressing her condolences in Parliament on Tuesday afternoon, she said: “We will look at the role social media played and what steps we can take, including on the international stage, and in unison with our partners.
“There is no question that ideas and language of division and hate have existed for decades, but the form of distribution, the tools of organisation, they are new.
“We cannot sit back and accept these platforms just exist, that what is said on them is not the responsibility of the place they are published.
“They are the publisher, not just the postman.
“There cannot be a case of all profit, no responsibility.
“This of course doesn’t take away the responsibility we too must show as a nation to confront racism, violence and extremism.”
The accused killer, who Ardern said would be “nameless” when she spoke, live-streamed a video of the shooting. Facebook, as well as other social media platforms, struggled to delete it as it was re-uploaded more than a million times.
A lengthy “manifesto” was also circulated by the accused killer.
At a post-Cabinet press conference on Monday afternoon (see video at top of story), Ardern said that over the weekend she was told Facebook had removed 1.5 million uploads of the video.
The removal of 1.2 million of these was automatic, while the rest were removed manually. Facebook didn’t indicate how many videos it may have failed to take down.
“My view is there’s more that can and should be done,” Ardern said.
The bosses of New Zealand's internet providers have backed Ardern, urging the CEOs of Facebook, Twitter and Google to "proactively monitor for harmful content, act expeditiously to remove content which is flagged to them as illegal and ensure that such material – once identified – cannot be re-uploaded".
Spark managing director Simon Moutter, Vodafone NZ CEO Jason Paris, and 2degrees CEO Stewart Sherriff have made these calls in an open letter to Mark Zuckerberg, Jack Dorsey and Sundar Pichai:
You may be aware that on the afternoon of Friday 15 March, three of New Zealand’s largest broadband providers, Vodafone NZ, Spark and 2degrees, took the unprecedented step to jointly identify and suspend access to web sites that were hosting video footage taken by the gunman related to the horrific terrorism incident in Christchurch.
As key industry players, we believed this extraordinary step was the right thing to do in such extreme and tragic circumstances. Other New Zealand broadband providers have also taken steps to restrict availability of this content, although they may be taking a different approach technically.
We also accept it is impossible as internet service providers to completely prevent access to this material. But hopefully we have made it more difficult for this content to be viewed and shared - reducing the risk our customers may inadvertently be exposed to it and limiting the publicity the gunman was clearly seeking.
We acknowledge that in some circumstances access to legitimate content may have been prevented, and that this raises questions about censorship. For that we apologise to our customers. This is all the more reason why an urgent and broader discussion is required.
Internet service providers are the ambulance at the bottom of the cliff, with blunt tools involving the blocking of sites after the fact. The greatest challenge is how to prevent this sort of material being uploaded and shared on social media platforms and forums.
We call on Facebook, Twitter and Google, whose platforms carry so much content, to be a part of an urgent discussion at an industry and New Zealand Government level on an enduring solution to this issue.
We appreciate this is a global issue; however, the discussion must start somewhere. We must find the right balance between internet freedom and the need to protect New Zealanders, especially the young and vulnerable, from harmful content. Social media companies and hosting platforms that enable the sharing of user generated content with the public have a legal duty of care to protect their users and wider society by preventing the uploading and sharing of content such as this video.
Although we recognise the speed with which social network companies sought to remove Friday’s video once they were made aware of it, this was still a response to material that was rapidly spreading globally and should never have been made available online. We believe society has the right to expect companies such as yours to take more responsibility for the content on their platforms.
Content sharing platforms have a duty of care to proactively monitor for harmful content, act expeditiously to remove content which is flagged to them as illegal and ensure that such material – once identified – cannot be re-uploaded.
Technology can be a powerful force for good. The very same platforms that were used to share the video were also used to mobilise outpourings of support. But more needs to be done to prevent horrific content being uploaded. Already there are AI techniques that we believe can be used to identify content such as this video, in the same way that copyright infringements can be identified. These must be prioritised as a matter of urgency.
For the most serious types of content, such as terrorist content, more onerous requirements should apply, such as proposed in Europe, including take down within a specified period, proactive measures and fines for failure to do so. Consumers have the right to be protected whether using services funded by money or data.
Now is the time for this conversation to be had, and we call on all of you to join us at the table and be part of the solution.
Facebook released the following statement overnight on Tuesday.
By Chris Sonderby, VP and Deputy General Counsel
Our hearts go out to the victims, their families and the community affected by the horrific terrorist attacks in Christchurch. We remain shocked and saddened by this tragedy and are committed to working with leaders in New Zealand, other governments, and across the technology industry to help counter hate speech and the threat of terrorism. We continue to work around the clock to prevent this content from appearing on our site, using a combination of technology and people.
We have been working directly with the New Zealand Police to respond to the attack and support their investigation. We removed the attacker’s video within minutes of their outreach to us, and in the aftermath, we have been providing an on-the-ground resource for law enforcement authorities. We will continue to support them in every way we can. In light of the active investigation, police have asked us not to share certain details. While we’re still reviewing this situation, we are able to provide the information below:
- The video was viewed fewer than 200 times during the live broadcast. No users reported the video during the live broadcast. Including the views during the live broadcast, the video was viewed about 4000 times in total before being removed from Facebook.
- The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended.
- Before we were alerted to the video, a user on 8chan posted a link to a copy of the video on a file-sharing site.
- We designated both shootings as terror attacks, meaning that any praise, support and representation of the events violates our Community Standards and is not permitted on Facebook.
- We removed the personal accounts of the named suspect from Facebook and Instagram, and are actively identifying and removing any imposter accounts that surface.
- We removed the original Facebook Live video and hashed it so that other shares that are visually similar to that video are then detected and automatically removed from Facebook and Instagram.
- Some variants such as screen recordings were more difficult to detect, so we expanded to additional detection systems including the use of audio technology.
- In the first 24 hours, we removed about 1.5 million videos of the attack globally. More than 1.2 million of those videos were blocked at upload, and were therefore prevented from being seen on our services.
- Member organizations of the Global Internet Forum to Counter Terrorism (GIFCT) coordinate regularly on terrorism and have been in close contact since the attack. We have shared more than 800 visually-distinct videos related to the attack via our collective database, along with URLs and context on our enforcement approaches. This incident highlights the importance of industry cooperation regarding the range of terrorists and violent extremists operating online.
- We identified abusive content on other social media sites in order to assess whether or how that content might migrate to one of our platforms.
We will continue to work around the clock on this and will provide further updates as relevant.
Here is a video of Ardern's speech, delivered in the House on Tuesday:
And here is a video of the speech by the Leader of the Opposition, Simon Bridges: