
AI is more a continuation of the industrial revolution than the Terminator film, says AI researcher Michael Timothy Bennett


By Gareth Vaughan

2023 has become the year of AI. Hype and doomsaying about AI, or artificial intelligence, are hard to avoid.

A key catalyst was OpenAI's release of AI chatbot ChatGPT late last year. So should we be excited or fearful about the rise of AI, or both?

I discussed this with Michael Timothy Bennett, an AI researcher at the Australian National University, in the latest episode of our Of Interest podcast. Bennett has recently returned from a major Artificial General Intelligence conference in Stockholm, where he both presented and won an award.

He described the mood at the conference as "feverish and exuberant," noting "suddenly there's a whole lot of money and power at stake" in the AI industry.

So what are we to make of all the hype around AI, and what might it mean for our lives?

"It's sort of the next step in the industrial revolution, more than a lot of what we'd see in, say, Terminator," Bennett says.

"AI is like a collection of black swan events that are going to play out over the next several decades as we see different sorts of jobs and industries hit with a lot of automation. Things will get much easier for some people and a lot harder for others."

In the podcast he talks about just what AI is, its origins, ways we've been using it for years, his take on predictions of AI-derived productivity gains and job losses, and whether the New Zealand government should be looking to regulate AI technology.

He also offers suggestions on how young people heading into the workforce or considering career options should think about AI, how middle aged workers should think about it, what it means for business owners, and how investors should be considering AI.

Bennett also weighs in on the debate over whether AI is an existential threat or could be humanity's salvation.

*You can find all episodes of the Of Interest podcast here.


42 Comments

Said this months ago. It's a black swan event and few seem to see it.

If you're on a PC all day and not using it, you're falling behind.

Thus TOP's UBI policy gets more relevant as time marches on.

Bring in UBI, adjust the tax system (which goes hand in hand), scrap the benefit system and let folk get on with it.

 

 


I don't think it's that simple. I love the idea of a UBI but there's not enough evidence to say it would work in that way, especially if there are fewer actual jobs to be done. It doesn't seem like anyone has cracked that chestnut and we have a bunch of people with high debt relative to future earnings who are going to struggle either way. 

I am, however, incredibly keen to see how much of the country's (and the public sector's) administrative burden can be automated, and what potential savings that might open up.

As far as the doom and gloom goes: AI has the potential to refine things we have developed and take for granted and unlock efficiencies we didn't know we had, and unlock relationships (e.g. materials engineering, medical research) that might have taken years of funding and research to attain otherwise. A lot of the inefficiencies in our current lifestyles may become non-existent relatively quickly. I'll take being out of work while I retrain for a few months if it means the research pathway to things like cancer vaccines and major medical advances can be dragged forward by decades. Deal of the century IMO.  


UBI (9 mins) and AI (12 mins). The Opportunities Party leader Raf Manji - extended interview.

Raf leaves Nat & Lab leaders for dead on these issues.

'Who would make better decisions: 120 MPs or AI?'

At least TOP are future focused. 

 


Ah yes, the future where we're all having to rent our own homes off the government. I'll pass, thanks. 


You think it thrills me?

The other option is to give a tax-free investment return to non-homeowners of an equivalent value.

Okay with that, or do you think we home owners have a god-given right to dump an unfair tax burden on those without assets?

 


Because you don't already pay rates on your home, apparently.

The point is to tax unproductive things (houses) and reduce taxes on productive things (workers).


Which becomes a fool's errand, given you need a high income to maintain a mortgage these days. The TOP calculator tells me that I'll be worse off. So after years of scraping together a deposit, going without to pay down a mortgage, and incurring an even greater tax burden than I do now, I would then have to rent back the house I've put the effort in to buy from the government, which is responsible for most of the issues causing stupidly high house prices. All because economists who confuse complexity for elegance think renting your own house off the government is a better idea than, you know, fixing the underlying problems the government itself has caused?

The kindest way I can put my response to that is "no".


With AI it might play out in a myriad of ways, and nobody can know what to expect until it has already happened.

1. The problem is that the main development and biggest funding is in the military field, where our biggest superpowers and economies both have everything to lose if they don't win the race to develop a fully autonomous, decision-making weapon system that can feed off as much data as possible and beat the other side's military AI. The requirement for human intervention in decision-making would make any solution potentially slower and thus unacceptable, and any solution would need to be able to hack the other side's systems and deploy physical weaponry to win. It is kind of like developing an electronic military coronavirus with minimal controls and hoping it doesn't escape early.

Darwinism tells us exactly how that part ends. 
 

2. Job-wise, the IT industry will see to it that every potential application for AI is exploited as fast as possible... I suspect a vast number of low-paid support jobs, probably located in the emerging economies, will go first.

 


I'm with you, rastus. Introducing a UBI within the next 12 months would get us ahead of the ball. It's going to be a whole brave new world out there. The important thing will be getting social settings and a job-loss safety net in place ahead of the disruption that is so sure to come.

 


Where does the money for this UBI come from?
I really fail to see how an emerging technology correlates with giving everyone free money from the magic money tree.


Read the TOP policy. It is a redistribution of the tax burden, not an increase in tax, plus the not inconsiderable savings of a slimmed-down Work and Income.


(Personal insult and erroneous assumption removed, Ed).

Don't talk about AI unless you are a developer/engineer developing the technology, or at least a qualified engineer. If you don't understand the functioning of the underlying technology, you can't reliably comment on its implications at this stage.

AI has very specific narrow use cases which are largely downstream of data analytics, business intelligence and data warehousing. Lots of the Machine Learning stuff has immense cost to establish in both human capital and real capital, but it is a goldmine if you can find the right application for the right tools.

I work in AI/ML, there is an immense amount of investor money flowing into fraudulent or ridiculous projects in the sector on this sort of hype.


I know of psychologists who are using ChatGPT 4 to speed up and improve their report writing for when they diagnose patients - and they've been doing this on a daily basis since ChatGPT 4 was released. Getting the paperwork done faster and better will let them see more patients, or alternatively see the same number but with improved quality of life for themselves, because the most time-consuming and laborious part of their job becomes much easier. Avoiding burnout is important for highly specialised healthcare workers.

AI has very specific narrow use cases which are largely downstream of data analytics, business intelligence and data warehousing.

I don't think psychology fits into any of those boxes, sorry.

Note that they are not using AI to help diagnose the patients, merely improve the report writing. But AI assistance in the medical fields is only a matter of time and has the capacity to improve productivity in healthcare dramatically.


And in education. Marking is the most time-consuming of all tasks undertaken at university level - particularly if you want to provide good feedback specific to each individual assignment. And in the early years, much of the written work submitted also requires mark-ups. If educators can front-end AI with criteria and commentary, I can just imagine what a huge time-saving that would be. So much so that as an educator one could schedule specific one-on-one tailored instruction for individual students. So many possibilities for improved learning experiences. Burnout is common in the education professions, particularly as class sizes increase.


It will just become an arms race - the function that calibrates the AI to evaluate the qualities of a statement (in an exam / paper etc) will be the same function used by students to 'disguise' their ChatGPT/AI generated work. Something like 'analyse these examples of my original writing, then answer XX question using my writing style, and add a few minor mistakes based on my mistake style. Make the level of writing appropriate for a high level student at XX year of university'. 


Yes, there is that too.  A lot will depend on the nature of the assignments set. 


That's funny: so students will use ChatGPT to write their university assignments (already happening), then staff will use AI to mark them (maybe in the future). The human contribution to the exercise is then zero.

Academic integrity cases are way up at universities due to cheating via AI. It's very easy to catch cheating, but notoriously difficult to prove - the students will just deny it, or make up an explanation.

Is it even possible to design academic assignments that can't be done by mindless application of AI? AI will fundamentally change the way universities and high schools operate, and possibly contribute to their demise. I'm worried that, with the outsourcing of thought to AI, humans will degenerate into a species of dummies, without knowledge, unable to think let alone write. We will drown in an internet increasingly swamped with misinformation and terabytes of AI-generated text that no one will read.

The future of work is in practical trades: building, plumbing, electrician, and so on.


There is a theory with a fair bit of support that the reason human brains are diminishing in size is that we have been relying on writing to reduce the need to hold as much info in our heads. Processing remains as important, but memory less so.

Perhaps AI will start gnawing away at the need for processing.


I work in AI/ML, there is an immense amount of investor money flowing into fraudulent or ridiculous projects in the sector on this sort of hype.

Yes. A NZ-based market research start-up was referred to the Commerce Commission over its AI claims. Even a recognised global company is bandying around "AI solutions", but it's quite clear that they have nothing and are simply driving meaningless hype. I sat in on their presentation and they couldn't get under the hood in any way whatsoever, let alone open the hood to let people look around.


I can't talk about AI?  That's rather arrogant and condescending.

I know my area of expertise. I can run ChatGPT across articles, advice, commentary and my own opinions, and have a nice summary in a nanosecond of what I'm wanting to convey to others. I have the skill to read the outcome first and verify that what I'm getting is correct. I can train it to match my style of writing.

I'm sure David will soon be doing the same for his morning briefings (if he's not already). Before long they will be done by his AI while he sleeps.


Question: does your "corrected" summary feed back into the AI to "learn" from? If so, how do you protect your IP? Or does using AI make your IP available to all? The reason for asking is that if the AI does not get updated corrections and data, it cannot grow its database of correct information for others.


I'm really interested in this angle.

As AI-generated text is published to the web and the AI model keeps growing and consuming more web data, surely it becomes a circular feedback loop with an unknown level of bias.

Furthermore, how does the AI model know that it isn't consuming spammed articles on the web that have been generated with a specific bias embedded in them?


As AI-generated text is published to the web and the AI model keeps growing and consuming more web data, surely it becomes a circular feedback loop with an unknown level of bias.

Yes. This is referred to as "model collapse", where later-generation AIs are trained on the output of earlier ones, so they end up detached from reality. https://decrypt.co/144271/ai-learning-from-ai-is-the-beginning-of-the-e…

This gives big tech companies that have troves of human data to train on - think Facebook content from the company's inception through to 2022 - an advantage when it comes to training new AIs, as they can be much more confident in the nature of their data source.
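The dynamic is easy to demonstrate with a toy sketch. The "model" below is nothing like a real training pipeline - it just learns the empirical distribution of its training data and generates by resampling - but it shows why training each generation only on the previous generation's output loses diversity.

```python
import random

def train_and_sample(corpus, n):
    # A toy "model": learn the empirical distribution of the training
    # corpus, then generate by sampling from it with replacement.
    return [random.choice(corpus) for _ in range(n)]

random.seed(42)

# Generation 0: "human" data - 100 distinct values.
corpus = [random.gauss(0, 1) for _ in range(100)]
diversity = [len(set(corpus))]

# Every later generation is trained only on the previous one's output.
for generation in range(50):
    corpus = train_and_sample(corpus, 100)
    diversity.append(len(set(corpus)))

# Resampling can never invent values it hasn't seen, so diversity
# only ever shrinks - the "model" drifts toward a few repeated outputs.
print("distinct values: gen 0 =", diversity[0], ", gen 50 =", diversity[-1])
```

Real model collapse is subtler (it shows up as thinning tails and lost rare content, not just repeated values), but the one-way ratchet is the same.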


Hopefully it corrects, in which case everyone benefits.

Knowledge is power.

At the moment knowledge is corralled via Govt gatekeeping and such things as legal databases and big Law. I can't wait until ChatGPT has the ability to access every piece of NZ case law and legislation. The big legal boys will become naked.

The people will benefit. 


Interesting. As the keepers of the database (those fronting the capital cost to physically set it up, add data to it and maintain it) have costs involved, how will those trawling through the database pay for the data obtained?

Nothing is free, so how much would you be prepared to pay (as opposed to buying those fancy law books)?


Hopefully it corrects, in which case everyone benefits.

Which is why 'she'll be right' and AI are not necessarily happy bed partners. If you think AI can be used to draft contracts, it goes without saying that the contract needs to be reviewed. Just a single example.


I can't wait until ChatGPT has the ability to access every piece of NZ case law and legislation. The big legal boys will become naked.

You can access case law and legislation if you wish. It doesn't necessarily mean you have any advantage over a legal professional. 


Have a listen to this podcast; it answers most of your questions in non-techno-babble.

https://www.realvision.com/podcast/realvision/episode/2a7ff5b0-02f9-11e…

A fairly plain-English description of how ChatGPT-4 works starts at about the 15m20s mark: "Mary had a little"...
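For anyone who can't listen: the "Mary had a little..." explanation is next-token prediction - given the words so far, pick the likeliest continuation seen in training. Here is a toy bigram sketch of that idea (a deliberately crude stand-in of my own; GPT-4 uses a vast neural network trained on web-scale text, not a lookup table):

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for the web-scale text that
# large language models are trained on.
corpus = (
    "mary had a little lamb "
    "mary had a little lamb its fleece was white as snow "
    "jack had a little trouble"
).split()

# Count which word follows which: a bigram table, the simplest
# possible next-token predictor.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return the continuation seen most often after `word` in training.
    return following[word].most_common(1)[0][0]

print(predict("little"))  # "lamb" follows "little" twice, "trouble" once
```

The prediction is purely statistical - the table has no idea what a lamb is, which is the point the podcast is making.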


If I type "Mary has a little" into my DuckDuckGo search engine, I get "Mary had a little lamb" as the result. A very poor example of the capability of ChatGPT.

There is also the claim that there is no feedback of correct answers, so ChatGPT cannot "learn" new data (tokens), meaning it will repeat wrong data. Hmm, hardly intelligent. Do we believe the AI companies will only enter "true" data into the database?

Yet later he claims that ChatGPT will take corrected information (the "capital of France is Munich" question). The "first man on the moon" question shows that presuppositions can produce Yuri Gagarin as the first man to walk on the moon, depending on whether you have an American or Soviet bent to your question.

Scary stuff that the owners of the AI data can "configure" answers to reflect what they want you to see. Truth goes out the window.

 

 

 


It wasn't a demonstration of the capability of ChatGPT, it was a very simplistic demonstration/explanation of how it works.

It seems you struggled to comprehend much of the podcast, and what AI is.

 


Love the condescension. The podcast went from denying there was an information feedback loop to later explaining that yes, there was an information feedback loop, used by the administrators to correct "false" information. It seems you struggle to comprehend the shortcomings of ChatGPT and the numerous IP data-scraping complaints. But dream on.

Expect more of this to follow:

https://edition.cnn.com/2023/06/28/tech/openai-chatgpt-microsoft-data-s…

"The complaint also claims that OpenAI products “use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”"

 

 


My previous comment stands.  You failed to understand the difference between context and neural network training.

It seems you struggle to comprehend the shortcomings of ChatGPT

Incorrect again. I did not say anything about its limitations; I simply addressed your incorrect comments.

the numerous IP data-scraping complaints.

Again, I didn't address that; it is not intrinsic to AI, and is a matter for the owners of AI models to manage. This seems to be the root of the axe you have to grind. Grind away.


AI-generated text does not come from a quality model of mind, and the AI is not capable of digesting the text properly. It is an imitation of speech and language, but the model does not fundamentally understand the schemas, properties or attributes of any word it uses. It has an understanding of the rules of language, grammar and so on, but it does not fundamentally understand what it is talking about.

What ChatGPT has done is resolve a niche issue that Google search used to fill. It allows you to pose a contextually specific question on a subject, digest it very quickly and find the relevant answer very quickly. It has effectively beaten Google for lots of subjects, but it is inherently untrustworthy, producing feedback and answers which are not verifiable.

These tools are flashy and amazing, but the model itself has no theory of mind or comprehension, only an imitation of the brain's neural network. It is not reliable enough to produce clean results at better than the human error rate (~2%).

AI like this will have immense implications in computer vision and robotics (specifically in warehousing and agriculture). It has lots of applications in scheduling and other mathematically intractable yet persistent information-management tasks. That is probably a few billion dollars of problem in New Zealand alone, yet tapping it is tough.


That's a very narrow view on what ChatGPT "has done."


The problem for what we think of as AI is that it can only scan and harvest data from public internet portals. If, like me, you withhold data from the public portals, purported AI cannot harvest it to get the widest and best available data to solve a problem.

AI should really be called IA.  Intelligent Automation.

Until AI can think, it is nothing but IA. And IA that can think, or is sentient, is not available.

Copyright infringements are a major problem for what we term AI, as are liability issues when data is used incorrectly.


The problem for what we think of as AI is that it can only scan and harvest data from public internet portals. If, like me, you withhold data from the public portals, purported AI cannot harvest it to get the widest and best available data to solve a problem.

Right. And it doesn't have access to 'walled gardens' beyond scraping. 


I used ChatGPT to augment my own coding and spin something up in a couple of hours that I would likely have had to hire a freelancer to do.


I don't see the intelligence in the "AI". Rather, it has immense processing capacity. Huge capacity to do work for you - but it's still dumb.

To explain, think of your experience with Google. A vast roundup of links is made available in response to your question. But you know one third of those will be stupid, and you filter those out without even having to think hard. You have done that automatically for years.

Then ask a question of ChatGPT and you get a wonderful response. Maybe a 1,000-word explanation with listed points. Very useful, plus plus, but you soon learn to edit it carefully.

Much better than Google, but equally dumb.

Not intelligent at all, just relentlessly hard working.  I know humans like that.  Very dangerous folk.


And ChatGPT just gives 'one' answer, doesn't it?

Then you point out where it is wrong and it apologises and gives you a new 'correct' answer.


Wherever there is structured data or text-based policies (CCCFA, FSANZ standards, FTA etc.), an AI interface could drive faster and better-balanced outcomes or decisions. Given sufficient investment in computing, FTE resourcing dedicated to decision support analysing empirical info will soon be labelled 'low skilled'. I am worried if my job...


An AGI only needs to be as intelligent as a human to be a superintelligence, because its memory and processing power/speed will be far superior to any human's. Superintelligent AGI just got a whole lot closer with ChatGPT and others.
