
Could NZ’s next Christchurch Call be a push for fairer, safer AI?

Technology / opinion

By Andrew Lensen, Ethan Plaut, Stephen Hill, Michael S. Daubs*

For New Zealanders, artificial intelligence (AI) is fast becoming as much a part of everyday life as smartphones and social media before it.

According to the recently released 2026 InternetNZ Internet Insights report, nearly eight in ten Kiwis have used AI tools in the past year. More than half are now using them at least weekly.

But as use rises, so too does unease about this transformative technology’s impact on society. In a recent survey, half of respondents were extremely or very concerned about AI’s implications for misinformation, privacy and potential misuse.

Other national surveys tell a similar story.

One found only a quarter of respondents believed current safeguards are sufficient to make AI use safe. In another, two-thirds of those surveyed said they would stop using a company’s products if they had concerns about how it was using AI.

These views are not surprising. Major AI companies are increasingly entangled in everything from “deepfake” images and AI-generated misinformation to geopolitics and military applications.

At the same time, this widespread distrust could represent another opportunity for New Zealand to influence big tech – and build our own valuable brand grounded in responsible AI.

Who controls AI – and on whose terms?

The US-Israel war on Iran – where AI has helped identify bombing targets – has raised fresh concerns about the technology.

In the lead-up to the conflict, major AI companies were pressured by the US Department of War to allow widespread military uses of their AI systems.

Anthropic pushed for limits on applications like autonomous weapons and surveillance but was sidelined. Rival OpenAI instead agreed to allow broad “lawful” military uses, prompting a backlash and reports of users deleting its ChatGPT app at triple the usual rate.

China’s military is meanwhile leveraging its own AI-powered systems, while companies like Palantir, chaired by US billionaire and New Zealand citizen Peter Thiel, have reportedly supplied AI tools used by militaries in Ukraine, Gaza and Iran.

New Zealand’s defence ministry is now mulling its own approach, with parliament divided on the issue.

These developments highlight how closely advanced AI companies are becoming entwined with state power, blurring the line between consumer technology and instruments of war.

Aside from military use, these systems are also vulnerable to political pressures in the US, including government influence over how they are deployed and used. Research has shown the products can reflect the values and biases of their creators.

As they spread globally, they are also increasingly seen as a form of what has been called “digital colonialism” – where powerful countries and companies export technologies that embed their own values and priorities in other societies.

How NZ can be a leader in AI

For all the concern expressed by New Zealanders, the country has so far taken a “light-touch” regulatory stance on the technology.

Rather than create dedicated regulation, as a recent open letter from AI experts to political leaders has called for, the government has chosen to rely on a patchwork of existing rules.

As consumers, New Zealanders have little say in how these products evolve, how they are designed or whose interests they serve. This reinforces the common feeling that AI is something happening to us, not for us.

It is also sometimes claimed the country is being left behind in the “AI race”, particularly by New Zealand business leaders concerned about keeping up with rapid technological change.

But there is another way for New Zealand, even with its limited scale and capacity, to make its mark in the AI world.

This would involve playing to its global reputation for integrity, human rights and independent thinking. Initiatives such as the Christchurch Call – launched after the 2019 mosque attacks to curb online extremist content – showed how a small country can convene governments and technology companies around shared standards.

In this case, New Zealand could strategically position itself at the forefront of a growing global push for responsible AI, which advocates for values such as fairness, accountability, safety and privacy.

The nation’s Māori data sovereignty movement is already an example of responsible data use. Māori values such as kaitiakitanga (guardianship and stewardship) reframe data as taonga (treasured or sacred assets) deserving careful protection.

Just as it did by drawing attention to social media harm with the Christchurch Call, New Zealand could collaborate with like-minded countries to push big tech companies to adopt concrete safeguards.

These could include measures such as watermarking and mandatory human oversight by a range of governance groups. This would also involve introducing standards for reporting environmental impact and auditing bias, ensuring AI aligns with New Zealanders’ expectations.

The government could work with industry to set clearer expectations for responsible AI – building on existing guidance for businesses on safe and ethical use – and invest in the development of local products that meet those standards.

There is also an economic opportunity.

Local companies could use this reputation to differentiate themselves in a global market where trust is becoming increasingly important. Research by global consultancy PwC suggests responsible AI can create real value, with more resilient systems and fewer trust-damaging failures.

Advocating for safe, responsible AI with clear economic benefits should be an easy decision – and the recent survey findings provide a clear mandate to do so.

But New Zealand won’t get there without decisive political leadership and a cohesive strategy. In an election year, politicians should be challenged to commit to AI that serves both the country’s economy and its people.


The authors acknowledge the contribution of Dr Andrew Chen to this article.


*Andrew Lensen, Senior Lecturer in Artificial Intelligence, Te Herenga Waka — Victoria University of Wellington; Ethan Plaut, Senior Lecturer (Communication) and Assistant Dean (AI for Teaching & Learning), Te Pūtahi Mātauranga | The Faculty of Arts and Education, University of Auckland, Waipapa Taumata Rau; Michael S. Daubs, Senior Lecturer in Media, Film, and Communication, University of Otago; and Stephen Hill, Associate Professor of Psychology, Te Kunenga ki Pūrehuroa – Massey University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
