
Artificial intelligence expert Mark Laurence calls for a non-partisan New Zealand government-led AI strategy

Photo by Igor Omilaev on Unsplash.

By Gareth Vaughan

Artificial intelligence (AI) should be a key election year issue, especially given the technology's major potential to help improve New Zealand's productivity, says Mark Laurence.

Laurence, founder and CEO of Ten Past Tomorrow, an AI consultancy and education business, spoke to interest.co.nz in a new episode of the Of Interest podcast.

"I'm kind of flabbergasted that it hasn't become a political talking point," Laurence says, noting AI "has become a really hot political topic" in the United States over the past six months.

He describes AI as "a general purpose technology."

"My focus is how does New Zealand, as a small, educated, economically prosperous and politically stable country, how do we become the best users of this technology where we as a nation, we're very skilled and very literate and know how to use it, know when to use it, know how to use it responsibly and ethically?"

"Because you can scale from the individual productivity to national GDP on a very clear line."

Laurence points out Singapore is spending NZ$1.25 billion over five years with the goal of tripling its AI practitioner workforce. The United Kingdom is investing US$500 million a year over the next five years with the goal of having 10 million AI-literate workers by 2030. And Finland is spending €100 million a year for the next four years on AI readiness training.

So does he think getting a more AI-literate NZ population needs to be government-led?

"I do [think so] and I think importantly it needs to be non-partisan," Laurence says.

" Whichever party wins [the election], this needs to happen. It's like to me, it's that critical to New Zealand productivity challenges. And so yes, it absolutely needs to be publicly led."

However, he adds that in the countries making public investment he cites, private investment generally "floods in behind it."

"We [NZ] have an AI strategy which was released last year. It's pretty flimsy and really if you kind of read between the lines, it's basically saying at the moment we're leaving this to the private sector to kickstart. I do think the stimulus needs to come, the action needs to come, the motivation needs to come, from public sectors," says Laurence.

"Simply, this nation has an obsession with productivity challenges that we've developed in the last number of years. That's why I say sitting still is not a neutral option, it's a decision with consequences. The gap compounds [and] moves from being a gap to actually a chasm."

In the podcast audio Laurence also talks about how NZ businesses are working with and thinking about AI, AI training, education opportunities from AI, guardrails and regulation, the previous technological breakthrough he compares AI with, how the effects and harms of AI on children could be worse than those of social media, why he says "AI is going to make lazy people super lazy and it will give dedicated people superpowers," and more.

You can find all previous episodes of the Of Interest podcast here.



47 Comments

Why? Real world use cases for AI are limited and not fully established. 


SINGAPORE: A new National Artificial Intelligence (AI) Council will be established and chaired by Prime Minister Lawrence Wong to coordinate and drive Singapore’s AI strategy.

The council will oversee the development and execution of “AI missions”, said Mr Wong on Thursday (Feb 12) in the Budget 2026 statement.

“These missions will drive AI-led transformation in key sectors of our economy, and push the boundaries of what is possible for Singapore and for the world,” said Mr Wong, who is also the finance minister.

The missions will focus on four sectors: advanced manufacturing, connectivity, finance and healthcare. 

https://www.channelnewsasia.com/singapore/budget-2026-national-artifici… 


Winners in business will take the advantages A.I. offers, seeking opportunities for advancement; those who don't will lose competitive advantage. This applies at all business scales, and at the national scale. The statement 'Real world use cases for AI are limited and not fully established' is not a reason for inaction. Refusing to use A.I. to solve problems, gain knowledge, explore previously unknown (to them) possibilities, and test those possibilities with A.I. analysis of proposals, means of implementation and outcomes, and instead proceeding only by trial and error, could lead to avoidable mistakes that cost dearly in time and money. (I'm speaking from experience, e.g. building too small a heat exchanger because, before A.I. was available, I couldn't do the calculations that are now fast and easy with it.)

A.I. is not creative; it only operates on ideas and information (some false) that it scrapes from the internet, and it makes mistakes, so a user needs to critically scrutinise its answers and know when to pull the plug on its hallucinations and errors. Starting questions in a different stream, so that it doesn't loop back to previous bad answers, is necessary to hammer out the information field and the limits of what is known, rational and worth further investigation and investment of time and money before you get to the build stage.


I agree. There are few "experts" on AI within existing businesses. It's often an existing HR or IT staff member who might not know much, and probably doesn't know how it can be applied to existing roles. But that means there's some genuine opportunity to develop quickly in the space, and this can become a point of difference in the job market. Or it might create new roles or business opportunities.


I work in an office role and I can tell you the real world use cases are vast. AI can already write code, write reports, write plans, summarise, organise and advertise better than people. It's increasingly able to combine these abilities to carry out complex work without oversight. Office work is going to be cut by half or more, unless we install artificial roadblocks.

Notably this kind of work tends to be an intermediate input into the economy-wide production process, and I think there will be significant bottlenecks on both supply and demand sides that will limit AI's ability to boost economic output - but it will enable us to dramatically reduce labour inputs, if we wish.

We're going to need to show a lot more imagination and ambition in our policy settings to get the best of it though. Policies that promote reductions in work hours would be a good start. Else we're looking at a choice between wasting the labour-saving potential of AI, or a painful increase in unemployment. AI will also significantly increase the urgency of building a tax system that inhibits rent seeking and actively stabilises or even reduces wealth inequality. 

 


AI can already write code, write reports, write plans, summarise, organise and advertise better than people.

All this is functional. At the end of the day, if there is no value in the output, it doesn't matter if the slop is produced by human or by AI. 

You could have the most fancy pants postgrad degree in AI and have completed an internship at Meta or Accenture, but if you don't understand what problem is to be solved, any implementation of AI wizardry is pointless and meaningless. I encounter grandiose claims on a daily basis now from corporate grifters and conmen.

In a new analysis of a survey published by the National Bureau of Economic Research and highlighted by Fortune, around 90 percent of the nearly 6,000 interviewed CEOs, chief financial officers, and other top executives at firms across the US, UK, Germany, and Australia said that AI has had no impact on productivity or employment at their business.

To be clear, the question was about AI’s impact generally, and not just from implementing it in the workplace. But around 70 percent of the firms reported actively using AI, meaning the vast majority of them are admitting that adopting the tech hasn’t budged the needle for them yet.

https://futurism.com/artificial-intelligence/survey-ceos-ai-workplace

  


This is true: never have we had the ability to build the wrong solution faster.

But AI allows those in the know to build the right solution much, much quicker. Like the $20k of tokens that built a C compiler that can compile most major open source projects, Linux, etc.

Most people just do not have visibility of multi-agent, MCP-empowered AI system builds; this stuff is now really, really good.

People will clear their old backlog items, as implementation will now be fast. And as the price of intelligence falls (think minimum wage), demand will go exponential. Imagine if you could employ people at $1 a day: you would employ 100. This is AI.

 


We're going to need to show a lot more imagination and ambition in our policy settings to get the best of it though. Policies that promote reductions in work hours would be a good start. Else we're looking at a choice between wasting the labour-saving potential of AI, or a painful increase in unemployment. 

 

AI can/should replace over half of government servants for starters. Policy analysis is mainly about being able to research and reference adequately, and then document those research findings; AI does all that just fine. A single policy analyst job now takes one-quarter of the time it used to, and as researchers become more skilled in the use of the tool, less than 10%.

AI will be more of a threat to the professional (white collar) classes than to the labour classes. The biggest thing any government needs to think about is what it is going to do about managing a lot, lot higher percentage of unemployed people, and a great deal of short-term work for another large swathe of the population.

 

  


AI is really good at working inside a bounded set of rules - which is a working definition of the legal system.

I'm just waiting to see what happens there. 


Is gonna be interesting to use AI to cross-check judges' reasoning documents.


Copilot seems to be a pretty good patch to put on top of a variety of enterprise systems; it lets you search those systems for information and can summarise, provide links, reason, then present results, etc. Sounds basic, but I haven't worked anywhere where this was possible before. It definitely improves productivity and gives time back. At the least it will be a sinking lid: as staff leave naturally, they might not be replaced.


Office work is going to be cut by half or more, unless we install artificial roadblocks.

Speed of uptake is key; plenty of businesses obliviously install their own roadblocks to taking advantage of new tech.

I expect the change to be gradual in the labour markets, which will adapt.

More attention should be on safeguarding and educating the less tech-savvy (people and businesses) on how to avoid being scammed by AI-generated material, and on cyber security issues.


"Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone" Matt Shumer


The real world use cases are growing by the day, exponentially. We should be discussing this at a national level; the system is not designed, nor prepared, for a solid uptick in beneficiaries from lack of work. We are already seeing businesses adopt it to 'tide them through' tough times when they will never replace the workers displaced.


The real world use cases are growing by the day exponentially.

Which doesn't necessarily mean anything. If new use cases don't add anything in terms of value or efficiencies, who cares? 

ML and NLP have been around since the 1950s. They have been used to create value and efficiencies but most people would be blissfully unaware or yawn. 

Now that the 'AI' tag has been applied, people are all of a sudden interested in magic box solutions. Many of which are garbage.  

Even R, Aotearoa's gift to open source software, has long had ML capabilities, but most Aotearoans either don't know what R is or simply don't care.


Big step from kNN in R/Python trained on some bespoke data to LLMs.
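For context on the size of that step: a classical kNN classifier really is just a handful of lines trained on bespoke data. A minimal pure-Python sketch (the tiny dataset is invented for illustration):

```python
# Minimal k-nearest-neighbours classifier: the "classical ML on bespoke
# data" being contrasted with LLMs. Dataset is made up for illustration.
from collections import Counter
import math

def knn_predict(train, labels, point, k=3):
    """Classify `point` by majority vote among its k nearest training points."""
    dists = sorted((math.dist(x, point), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Two well-separated clusters of invented 2D points.
train = [(1.0, 1.0), (1.2, 0.8), (8.0, 8.0), (8.2, 7.9)]
labels = ["a", "a", "b", "b"]
print(knn_predict(train, labels, (1.1, 0.9)))  # → a
```

The contrast the comment draws is real: everything this model "knows" sits in those four hand-picked points, whereas an LLM arrives pre-trained on internet-scale data.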


Which is why, IMO, the implementation of AI on private data sets, like those Alibaba will have (and the public doesn't), might be an example of real use cases and value.

Anyway, R can be used effectively with LLMs, both via hosted APIs and via Python/Hugging Face bridges.


I have seen AI learn to code in a 4GL language that is not on the web anywhere. It does not need much training, just MCP access to data structures, compiler messages and language syntax.

It documents modules in both coder-speak and domain knowledge, i.e. business function. It can then find current bugs and suggest fixes.

What is still a problem is that you can ask it three times to code something, and all three versions will pass the tests it wrote, but the code itself will have subtle differences.

I have seen it convert Delphi into .NET, keep the old Delphi as comments and also add comments; it passed tests.

This is game changing and job destroying.

 

 


That is not in dispute. What I am suggesting is the following:

1. Do you have a real problem to solve, or are you deluding yourself into thinking that AI is able to anticipate any future problem?

2. If you have an outdated or inferior tech stack, yes, AI can be part of a solution. But it doesn't come out of a magic box like many people seem to think it does.


Little hint: most big corps have a lot of old tech stack that is full of risks and non-compliance with modern standards.

Look at the SaaS capital collapse; many of those have great tech stacks.

 


Yes. The energy exploration industry is using tech stacks that go back to the 80s. I know of someone who's working on a machine learning-based solution to rectify that. And yes, some of the coding work is being handled by AI. One of the issues the creator is having is that investors don't really understand the domain; the technology; and how to value the solution in terms of ROI.   


Well, show me a CEO who understands the tech stack...

You should watch level 1 risk trying to explain to level 2 risk what an SQL injection vulnerability is, and why it's bad, while staring at an old tech stack where it's everywhere. Why can't you fix this in the next 2 weeks... as a BAU task?

By the time it gets to the CEO it's "the software no workee".
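For anyone staring at that conversation: the vulnerability itself is easy to demonstrate. A minimal sqlite3 sketch (table and inputs are invented) contrasting string concatenation with a parameterised query:

```python
# Demonstrates SQL injection: the same malicious input behaves very
# differently when concatenated into SQL versus passed as a parameter.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "nobody' OR '1'='1"

# Vulnerable: user input concatenated straight into the SQL string,
# so the OR clause becomes part of the query and matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'"
).fetchall()
print(unsafe)  # → [('alice',)]

# Safe: a parameterised query treats the input as a literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(safe)  # → []
```

The fix is mechanical (use placeholders), which is exactly why "it's everywhere in an old stack" is a volume problem rather than a hard one.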


My experience with A.I. so far is "no" to 'AI is able to anticipate any future problem.'

It can't do 3D design, and can produce shockingly bizarre outputs when given questions requiring it.

It can't make any sort of 'intellectual leap' to a different solution that is obvious, often simpler and lower cost.

It doesn't have a brain, but it is very useful in helping solve problems: designing physical mechanisms, calculating energy requirements to modify environmental conditions during climate extremes in nursery plant propagation, and I would imagine thousands of other uses across the nation.

I haven't yet tried using it to analyse spreadsheets of parent feijoa plant performance to help identify which seedlings, among hundreds or thousands in my breeding program, are likely to have the best traits for, say, long-store chilled export fruit to international markets. Years ago I tried, generated many spreadsheets, and found (sort of) that certain parents were more likely to produce seedlings with longer-storing fruit, but little beyond what I already knew from just doing the work of chilled storage trials and fruit testing.

A.I. might be useful for this, but given the hundreds of compounds in the fruit, and the differences in their presence, absence and ratios between different seedlings' fruits, this is a tall order. But if there were a very fast, high-volume machine capable of analysing hundreds or thousands of fruit per day (a system of, say, bar-code-labelled fruit: visually scanned, weighed, photographed from multiple angles, dimensions measured, graded, a sample taken and macerated, fed into an ultra-fast mass spectrograph, DNA sequence determined, data output stored, with complete comparative analysis of compound presence and mass ratios and then the genome), then the A.I. could presumably do some vast calculations to determine, for example, that parent A crossed with parent B has a 1% chance of producing a seedling whose fruit stores for 113 days, or has 27% better Phytophthora root rot resistance than the best currently available variety. I want such a machine but can't afford it, not even the fruit scanning gear.
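The spreadsheet part of that analysis is straightforward to sketch. A hedged, minimal example (all numbers are invented, and a hand-rolled `pearson` helper stands in for whatever statistical tooling a real breeding program would use): correlating a parent trait score with the mean chilled-storage life of that parent's seedlings, in pure Python:

```python
# Pearson correlation between a parent trait and offspring storage life.
# All data below is invented purely to illustrate the calculation.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: a firmness score for each parent plant, and the mean
# days of chilled storage achieved by that parent's seedlings' fruit.
parent_firmness = [3.1, 4.0, 2.5, 4.8, 3.6]
seedling_storage_days = [62, 80, 55, 95, 71]

r = pearson(parent_firmness, seedling_storage_days)
print(round(r, 2))  # → 0.99 on this invented data
```

A strong correlation would only flag a trait worth tracking; the mass-spectrograph pipeline described above would feed far wider tables into the same kind of calculation.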


But you have a clear relevant use case. FYI, AI can already do a lot of 3D design work, but it functions more as a powerful assistant or generator than a full replacement for human 3D designers. It can generate full 3D models from a text prompt in seconds.


If new use cases don't add anything in terms of value or efficiencies, who cares? 

Businesses care, as they get value by laying off staff due to the efficiencies they will get via AI, and as more adopt AI tools, others will look into it through word of mouth in business circles (most tangible). Therefore workers care too: they need a job to pay the bills and the mortgage, and if this may impact that, they should care.


That is my point. If you don't need people and can run your business on AI agents, all power to you. Caring about the welfare of people is a different issue. If that welfare is part of your business' reason to exist, I understand. But tech for tech's sake does not guarantee efficiencies or value.  


Hence the introduction of the steam engine, i.e. the industrial revolution, could not be stopped.

But this AI shift will occur 10 times faster and take out high-paying knowledge workers = mortgage holders.

How long until your job title is a risk factor when applying for a 30-year mortgage?

IMHO, a bigger issue than climate change here.

 


IMHO bigger issue than climate change here

Come on... here, as in within your office? I'd hope so.

Because on a global scale, certainly not.


I agree, there will be job losses faster than forecast growth.

Most people are not using this first hand at scale, so they just do not understand its disruptive ability.

 


It's probably too early for the government to be able to offer anything of real value (other than throwing money around).

Perhaps funding for courses that teach AI (if there isn't already enough funding), but even then I don't think we will need that many people in NZ who understand how AI works; it will be the big global companies that do most of that. It's really just a matter for businesses to embed it into their systems, and the payoff should be big enough not to need government support.


I'm sure in a few years government will fund some well-meaning courses for the general populace. Unis are now already getting "AI" into course and degree names.

In the meantime if you're curious just use YouTube to give yourself an education and practice on some real life tasks. 


Neither are most businesses, and especially the largest employers; e.g. Te Whatu Ora has barely moved on from processing its data with pen and paper, let alone integrating AI health professionals.

Disruptive perhaps, but don't underestimate how slow most businesses and people are to adopt new tech.


Really good reference. Cheers. BTW, did you notice this:

That said, AI is still in its infancy and it wasn't all a slam dunk. In fact, half of the tested models failed outright - often due to basic coding issues like referencing nonexistent packages or mishandling data formats. R code proved more reliable than Python in this setting.


It was a one-shot test.

 


In NZ? No way

And that's a link about a research lab?


Here is the blog post from Citrini Research about the AI apocalypse that is being cited as triggering another AI-driven selloff in US equities. The Goldman Sachs Software At Risk Basket fell 6% today and is now down 33% year to date.

IBM's share price was clobbered, down over 10%, after Anthropic announced that Claude can streamline COBOL code.

What follows is a scenario, not a prediction. This isn’t bear porn or AI doomer fan-fiction. The sole intent of this piece is modeling a scenario that’s been relatively underexplored. Our friend Alap Shah posed the question, and together we brainstormed the answer. We wrote this part, and he’s written two others you can find here.

Hopefully, reading this leaves you more prepared for potential left tail risks as AI makes the economy increasingly weird.

https://www.citriniresearch.com/p/2028gic


NZ is too busy arguing about school books in Maori and who offended Winston at the Northern Club to do anything about AI.

Tabula rasa for the robots down here!

🥂


AI doing the wrong thing correctly. 


There are people out there now, BAs or domain experts, who with AI will become future developers, because they can specify the problem and the wanted outcomes.

Great BAs will win.

 


Peter Thiel says it's golden times for people with philosophy education 


Agree. It's a 'low IQ' tool that needs clever, analytical, creative human minds to ask it the right questions and guide it by persistent nudges (it goes astray often... by design, to fill the page?) toward helping us solve problems related to the real world.


Business Analysts or Bachelors of Arts?


Critics of AI have short memories. Remember how bad search engines were back in the late 90s compared to what they're capable of now.

LLMs are capable of learning almost anything, and that's all that matters. Iteration will continue to improve their abilities exponentially.


Remember how bad search engines were back in the late 90s compared to what they're capable of now.

Search engines are possibly worse now. Enshittification. They're designed to be worse.  


Given how comprehensively the A.I. djinn is out of the bottle, what happens next is beginning to look alarmingly like social Darwinism: forced evolution and adaptation - or failure. 

The worry is activities, like health, where collapse can't be allowed to happen and where demand for resources, comparatively, just keeps growing because change is too slow.
