AI-generated spam may soon be flooding your inbox – and it will be personalized to be especially persuasive

AI may make spam more pervasive than ever. AP Photo/Gene J. Puskar.

By John Licato*

Each day, messages from Nigerian princes, peddlers of wonder drugs and promoters of can’t-miss investments choke email inboxes. Improvements to spam filters only seem to inspire new techniques to break through the protections.

Now, the arms race between spam blockers and spam senders is about to escalate with the emergence of a new weapon: generative artificial intelligence. With recent advances in AI made famous by ChatGPT, spammers could have new tools to evade filters, grab people’s attention and convince them to click, buy or give up personal information.

As director of the Advancing Human and Machine Reasoning lab at the University of South Florida, I research the intersection of artificial intelligence, natural language processing and human reasoning. I have studied how AI can learn the individual preferences, beliefs and personality quirks of people.

This can be used to better understand how to interact with people, help them learn or provide them with helpful suggestions. But this also means you should brace for smarter spam that knows your weak spots – and can use them against you.

Spam, spam, spam

So, what is spam?

Spam is defined as unsolicited commercial emails sent by an unknown entity. The term is sometimes extended to text messages, direct messages on social media and fake reviews on products. Spammers want to nudge you toward action: buying something, clicking on phishing links, installing malware or changing views.

Spam is profitable. One email blast can make US$1,000 in only a few hours, costing spammers only a few dollars – excluding initial setup. An online pharmaceutical spam campaign might generate around $7,000 per day.

Legitimate advertisers also want to nudge you to action – buying their products, taking their surveys, signing up for newsletters. But whereas a marketing email may link to an established company website and contain an unsubscribe option in accordance with federal regulations, a spam email may not.

Spammers also lack access to mailing lists that users signed up for. Instead, they use counterintuitive strategies such as the “Nigerian prince” scam, in which a Nigerian prince claims to need your help to unlock an absurd amount of money, promising to reward you handsomely. Savvy digital natives immediately dismiss such pleas, but the absurdity of the request may actually select for naïveté or advanced age, filtering for those most likely to fall for the scam.

Advances in AI, however, mean spammers might not have to rely on such hit-or-miss approaches. AI could allow them to target individuals and make their messages more persuasive based on easily accessible information, such as social media posts.

Inboxes are already bursting with spam. Epoxydude/fStop via Getty Images.

Future of spam

Chances are you’ve heard about the advances in generative large language models like ChatGPT. The task these generative LLMs perform is deceptively simple: given a text sequence, predict which token – think of this as a part of a word – comes next. Then, predict which token comes after that. And so on, over and over.

Remarkably, training on that task alone, when done with enough text and a large enough model, seems to be enough to imbue these models with the ability to perform surprisingly well on many other tasks.
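For a concrete picture of that loop, here is a minimal sketch using the small, openly available GPT-2 model via Hugging Face's transformers library – an illustrative stand-in, far smaller than the models behind ChatGPT:

```python
# A minimal sketch of next-token prediction: repeatedly predict the most
# likely next token and append it. Uses the small open GPT-2 model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Dear valued customer, we are pleased to"
input_ids = tokenizer(text, return_tensors="pt").input_ids

for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits          # a score for every token in the vocabulary
    next_token = logits[0, -1].argmax()           # greedy choice: the single most likely token
    input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Production chatbots sample from the predicted distribution rather than always taking the top token, which is why their output varies from run to run.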

Multiple uses of the technology have already emerged, showcasing its ability to quickly adapt to, and learn about, individuals. For example, LLMs can write full emails in your writing style, given only a few examples of how you write. And there's the classic example – now over a decade old – of Target figuring out a customer was pregnant before her father knew.
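To see how little data “a few examples” means in practice, here is a sketch of the few-shot prompting pattern; the writing samples and model name below are placeholders for illustration, not anything a real spammer is known to use:

```python
# A sketch of few-shot style mimicry: supply a few genuine emails as
# examples and ask the model to draft a new one in the same voice.
# Requires an OpenAI API key; the samples here are invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

writing_samples = [
    "Hey team - quick one from me. Can we push standup to 10? Cheers, J.",
    "Morning! Two things: invoices went out, and the deck still needs numbers. J.",
]

prompt = (
    "Here are examples of how I write emails:\n\n"
    + "\n---\n".join(writing_samples)
    + "\n\nWrite a short email in the same style asking a colleague to review a report."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```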

Spammers and marketers alike would benefit from being able to predict more about individuals with less data. Given your LinkedIn page, a few posts and a profile image or two, LLM-armed spammers might make reasonably accurate guesses about your political leanings, marital status or life priorities.

Our research showed that LLMs could be used to predict which word an individual will say next with a degree of accuracy far surpassing other AI approaches, in a word-generation task called the semantic fluency task. We also showed that LLMs can take certain types of questions from tests of reasoning abilities and predict how people will respond to those questions. This suggests that LLMs already have some knowledge of what typical human reasoning ability looks like.

If spammers make it past initial filters and get you to read an email, click a link or even engage in conversation, their ability to apply customized persuasion increases dramatically. Here again, LLMs can change the game. Early results suggest that LLMs can be used to argue persuasively on topics ranging from politics to public health policy.

Good for the gander

AI, however, doesn’t favor one side or the other. Spam filters also should benefit from advances in AI, allowing them to erect new barriers to unwanted emails.

Spammers often try to trick filters with special characters, misspelled words or hidden text, relying on the human propensity to forgive small text anomalies – for example, “c1îck h.ere n0w.” But as AI gets better at understanding spam messages, filters could get better at identifying and blocking unwanted spam – and maybe even letting through wanted spam, such as marketing email you’ve explicitly signed up for. Imagine a filter that predicts whether you’d want to read an email before you even read it.
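As a toy sketch of the normalization a smarter filter might apply first – an illustration, not any real filter's pipeline – consider undoing those substitutions before classifying:

```python
# Toy normalization: strip accents, undo digit-for-letter swaps and remove
# punctuation wedged inside words, so "c1îck h.ere n0w" matches "click here now".
import re
import unicodedata

# A fixed substitution table; real filters handle ambiguity (1 can stand
# for "l" or "i") with dictionaries or fuzzy matching instead.
LEET = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    # Decompose accented characters and drop the combining marks (î -> i).
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    text = text.lower().translate(LEET)               # undo character swaps
    return re.sub(r"(?<=\w)[.\-_](?=\w)", "", text)   # drop intra-word punctuation

print(normalize("c1îck h.ere n0w"))  # -> "click here now"
```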

Despite growing concerns about AI – as evidenced by Tesla, SpaceX and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak and other tech leaders calling for a pause in AI development – a lot of good could come from advances in the technology. AI can help us understand how weaknesses in human reasoning might be exploited by bad actors and come up with ways to counter malevolent activities.

All new technologies can result in both wonder and danger. The difference lies in who creates and controls the tools, and how they are used.


*John Licato, Assistant Professor of Computer Science and Director of AMHR Lab, University of South Florida. This article is republished from The Conversation under a Creative Commons license. Read the original article.


12 Comments

Fight AI spam with AI spam filters...? 

Don't really see how this is an issue. The article pushes the idea that AI will cause more spam, but the same intelligence at the same level can also fight it.


In this battle one side will have more resources than the other.  Compare the money spent advertising the drinking of alcohol with the money spent pointing out the dangers of its consumption.


Do you really think Gmail will allow their customers to get obliterated by spam to the point that their service is unusable? 

Email providers are some of the best-resourced IT companies in the world – Google and Microsoft, to name the biggest.


Gmail's AI is already too intrusive. Somehow my Gmail knows I have a boat insured, but I've never used Gmail with that insurance company.


People will soon work out that everything they engage with is more likely than not AI.

Perhaps we might see a disengagement with the screen and re-discover personal conversation and interactions.

 


Plot line for the next movie in the James Bond franchise?


It's a very real concern at the moment.

- In Ukraine, AI drones that can make their own decisions autonomously are being developed, modified and used in real battles – they need no communication and can act much faster than decision-making with a human controller or human opponent. But without proper oversight, who knows how their algorithms deal with the potential for collateral damage, and what the % tolerance is when a drone isn't 100% certain whether the target is fair game or civilian.

- In private markets for commercial AI, we have not only issues between US companies with huge pockets rushing beta AI into live systems, but as they accelerate development, China, Iran, Russia and other players will copy, steal and develop their code and experts, and they won't be worrying about much apart from winning.

There are so many potential issues with all this stuff it fills books. Not having oversight is insane.

 


Putin is not a drone controlled by AI. Little does he care (the same might be said for CIA drones), so it's hard to see how AI drones might be worse decision-makers than us humanoids.

But I get your drift.


We are already at the stage where someone with a PC and a half-decent GPU can run a quantised language model, like Meta's Llama, at home with no restrictions whatsoever. The days when this could be regulated are already gone.


I will never fall for...                     Mmmmmmmm DONUTS


Infinite bad takes are coming.

AI/ML is exceptionally overhyped. Content-generation AI is not aware of anything it is actually doing. It just takes a set of inputs across billions of parameters in a backpropagating neural network. Great for very limited, narrow environments where the total number of classifications of objects is small.

Most of the useful AI methods are for filtering massive quantities of data to flag unusual data: network penetration for cyber security, fraudulent transactions, spam detection, facial recognition, spotting cancer/illness in x-rays/MRI scans etc. It provides quantitative analysis that a human can then verify with qualitative analysis.

AI/ML is more or less just statistics with some layers added to look like a magical intelligence. It isn't; it is no less a trick than before.


I've been testing ChatGPT in my field for the past few weeks.

There are many tasks that it can do in a tenth (or less) of the time I would.

Short term, that's great. I can take it easy and let it do some laborious writing stuff.

Long term, managers are going to figure out it takes much less time to do those jobs and adjust time allocated/pay accordingly.

I really think we are on the cusp of massive employment/social change on the scale of the industrial revolution. 

[Be aware that it can generate perfect-looking references (good titles, correct journal and authors), but when you try to look them up in the actual journal they don't exist. It even lied when I asked if one was real. It said I could find it on Scopus, but that was false. This will change when they're connected to the internet, though, I think, which ChatGPT has just announced.]
