Regulating online hate speech and violence in the wake of the Christchurch terror attacks: what does the law say, and what more can be done?

Following the Christchurch terrorist attacks, how do we regulate the social media platforms that were front and centre of last week’s violent atrocity?

In recent years we’ve seen it time and time again: authorities playing catch-up with technology, from Uber and Lime scooters to efforts to devise an international digital services tax to make multinational tech giants like Google, Facebook and Amazon pay their fair share.

And while these other examples appear far more trivial than last week's events, which led to the deaths of 50 people in two Christchurch mosques, the same underlying problem remains.

The exponential growth of technology continues to leave central and local government playing catch-up.

The terrorist live-streamed his actions via Facebook, before the video was uploaded to YouTube and other social media networks. He also posted a manifesto outlining his extremist views online.

The ability of an armed offender to rapidly disseminate live video of an attack illustrates just how easily such social media platforms can be, and are being, misused.

The likes of Facebook, Twitter and YouTube have been at the heart of Silicon Valley’s social media revolution. But when the issue of regulation comes up and questions are raised about some of the objectionable content they host, they revert to type and once again say they have taken the offending content down. The ambulance at the bottom of the cliff.

But last Friday’s terrorist attack shows such companies now have to accept they are being used as a tool by extremists to radicalise people around the world and spread propaganda, from neo-Nazi white nationalists to ISIS and Al Qaeda.

Just saying you’ve taken down a video after 50 people have been gunned down doesn’t cut it.

Under the Harmful Digital Communications Act 2015, it is an offence to intentionally cause harm by posting a digital communication which leads to "serious emotional distress".

According to the Ministry of Justice, a person making a complaint about a host’s content will know within 48 to 96 hours whether the content will come down or not.

“Just how long will depend on factors such as whether the host can contact the author of the content; whether the author agrees to your request; and how long the host and author take to see the complaint and act. These timeframes may seem quite long in the fast-moving world of digital communications, but they aim to balance freedom of expression with reducing harm.”

Such offending under the act is punishable by up to two years' imprisonment or a maximum fine of $50,000 for individuals and a fine of up to $200,000 for companies.

But when the gunman was killing and maiming people in Christchurch, the threat of being charged under the Harmful Digital Communications Act would have been the last thing on his mind.

In an interview with TVNZ, former NZ Prime Minister Helen Clark said social media platforms had been slow to respond to hate speech.

“If this man, or these men, were active on social media with hate speech, one would frankly expect that to be picked up, not only by our own services but frankly also by social media platforms,” she said.

“Social media platforms have been very slow to respond in closing down hate speech and accounts, now how much was this man active in the two years leading up to it that we’re told he was planning it? I do find it odd that he could post his messages alerting people to his manifesto signalling that he was about to do so, something horrible, this not to be picked up. Then he was live streaming it for what, 17 minutes? Facebook doesn’t close that down, I mean what is going on here?

“I think this will add to all the calls around the world for more effective regulations of social media platforms. And on their performance to date self-regulation isn’t cutting it.”

Wellington-based human rights lawyer Michael Bott says we can use legislation to tackle the problem of hate speech and to deal with the social network providers that are hosting such content.

And he says the Harmful Digital Communications Act 2015 is fit for purpose, even if it might need some amendments.

“As with anything it needs to move with the times and be adjusted.”

But he says it’s up to the government of the day to make sure the law, as it stands, is enforced and the providers are keeping their end of the bargain as well.

“Google has a pretty useless track record in complying with New Zealand law in terms of take-down notices,” Bott says. “But I think the government needs to be more assertive in terms of monitoring extremist groups.”

And he says if that takes changes to the likes of the Harmful Digital Communications Act then so be it.

He says right-wing hate speech has grown rapidly online since the 2016 election of Donald Trump as US president, along with the rise of Viktor Orban in Hungary, the high profile of Marine Le Pen's National Rally in France, and the Brexit campaign in the UK, during which MP Jo Cox, a supporter of the Remain campaign, was murdered.

“There has been a fixation in the west with Muslim fundamentalism and tarring a lot of people with the same brush. But the violent fringe elements on the right have had little attention paid to them. But hate speech isn’t the sole property of the Middle East.”

In November last year Netsafe, an independent non-profit organisation providing New Zealanders with information and support about online safety, released a report on the issue titled Online Hate Speech: A survey on personal experiences and exposure among adult New Zealanders.

The foreword states that authors Edgar Pacheco and Neil Melhuish were surprised to find it was the first report written on the subject in New Zealand.

It says the first accounts of online hate speech can be traced to the US in the mid-1980s, when skinheads, Klansmen, and Neo-Nazi groups used primitive computer equipment to communicate through electronic bulletin boards.

“Since then, initial research interest on online hate centred on the growth of hateful websites and the characteristics and dynamics of online hate groups, from white supremacists to terrorist groups, but also on the type and the persuasiveness of the messages disseminated.”

And the report says more innovative tools, such as social media platforms, have given rise to new challenges and concerns regarding online hate speech.

“In this context, as commentators highlight, online hate speech is low cost, can be facilitated anonymously and pseudonymously, is easy to access, is instantaneous, can reach a larger audience, and can be spread via different formats across multiple platforms. It also raises cross-jurisdictional issues in regard to legal mechanisms for combatting it.

“However, one of the long-term challenges has been the lack of an agreed understanding and definition of online hate speech.”

And this creates practical difficulties in terms of preventing or removing hateful content online.

“Equally important are the evolving perceptions of what constitutes online hate speech. For example, legal perspectives initially focused on expressions of racism and xenophobia online but research has also moved towards instances of online hate in relation to gender, disability, and sexual orientation.”

In an interview with Recode last week, YouTube CEO Susan Wojcicki was asked about the use of the platform by extremist groups.

“We’re a platform that is really balancing between freedom of speech and also managing with our community guidelines.”

But Wojcicki admitted that the platform was being misused.

“There’s misinformation, there’s foreign government interference, there’s hate. There are many different areas that we’re focused on, and we’ve made a lot of progress. And I want to say there’s more progress to be made, I 100% acknowledge it.

“Well, first of all, we have community guidelines, and YouTube has had community guidelines since the very beginning, and those community guidelines include things like hate speech, and promotion to violence, and all kinds of other core areas that we believe in, but comments is a really important part of the platform for creators and fans, and they can always turn it off if they want to.”

Like Facebook, YouTube relies heavily on machine learning algorithms to sift through the billions of videos on the platform.

But such issues haven't gone unnoticed by US lawmakers.

US Congressman Jerrold Nadler, speaking during a Judiciary Committee hearing into the transparency and accountability of Google in December last year, said the use of online networks and social media by hate groups was a serious problem.

“While Internet platforms have produced many societal benefits, they have also provided a new tool for those seeking to stoke racial and ethnic hatred. The presence of hateful conduct and content on these platforms has been made all the more alarming by the recent rise in hate-motivated violence. 

“According to statistics recently released by the FBI, reported incidents of hate crimes rose by 17% in 2017, compared to 2016, marking the third consecutive year that such reports have increased. 

“The horrific massacre at the Tree of Life Synagogue in Pittsburgh, the recent murder of an African-American couple in a Kentucky grocery store, and the killing of an Indian engineer last year in Kansas are, sadly, not isolated outbursts of violence, but are the most salient examples of a troubling trend. We should consider to what extent Google, and other online platforms, have been used to foment and to disseminate such hatred, and how these platforms can play a constructive role in combatting its spread.”

The Department of Internal Affairs considers footage related to the attack to be objectionable material, meaning it is an offence to possess, share and/or host the content.

If you are aware of online footage related to the attack, report it to the Department of Internal Affairs.

