
The ‘dead internet theory’ makes eerie claims about an AI-run web. The truth is more sinister

An example of shrimp Jesus. Shutterstock AI Generator

If you search “shrimp Jesus” on Facebook, you might encounter dozens of images of artificial intelligence (AI) generated crustaceans meshed in various forms with a stereotypical image of Jesus Christ.

Some of these hyper-realistic images have garnered more than 20,000 likes and comments. So what exactly is going on here?

The “dead internet theory” has an explanation: AI and bot-generated content has surpassed the human-generated internet. But where did this idea come from, and does it have any basis in reality?

An example of a shrimp Jesus image on Facebook with no caption or context information included in the post. Facebook.

What is the dead internet theory?

The dead internet theory essentially claims that activity and content on the internet, including social media accounts, are predominantly being created and automated by artificial intelligence agents.

These agents can rapidly create posts alongside AI-generated images designed to farm engagement (clicks, likes, comments) on platforms such as Facebook, Instagram and TikTok. As for shrimp Jesus, it appears AI has learned it's the latest mix of absurdity and religious iconography to go viral.

But the dead internet theory goes even further. Many of the accounts that engage with such content also appear to be managed by artificial intelligence agents. This creates a vicious cycle of artificial engagement, one that has no clear agenda and no longer involves humans at all.

Harmless engagement-farming or sophisticated propaganda?

At first glance, the motivation for these accounts to generate interest may appear obvious – social media engagement leads to advertising revenue. If a person sets up an account that receives inflated engagement, they may earn a share of advertising revenue from social media organisations such as Meta.

So, does the dead internet theory stop at harmless engagement farming? Or perhaps beneath the surface lies a sophisticated, well-funded attempt to support autocratic regimes, attack opponents and spread propaganda?

While the shrimp Jesus phenomenon may seem harmless (albeit bizarre), there is potentially a longer-term ploy at hand.

As these AI-driven accounts grow in followers (many fake, some real), the high follower count legitimises the account to real users. This means an army of accounts is being created: accounts with inflated follower counts that could be deployed by the highest bidder.

This is critically important, as social media is now the primary news source for many users around the world. In Australia, 46% of 18- to 24-year-olds nominated social media as their main source of news last year, up from 28% in 2022, overtaking traditional outlets such as radio and TV.

Bot-fuelled disinformation

Already, there is strong evidence social media is being manipulated by these bot-inflated accounts to sway public opinion with disinformation – and it's been happening for years.

In 2018, a study analysed 14 million tweets over a ten-month period in 2016 and 2017. It found bots on social media were significantly involved in disseminating articles from unreliable sources. Accounts with high numbers of followers were legitimising misinformation and disinformation, leading real users to believe, engage and reshare bot-posted content.

This approach to social media manipulation has also been observed after mass shooting events in the United States. In 2019, a study found bot-generated posts on X (formerly Twitter) contributed heavily to public discussion, serving to amplify or distort potential narratives associated with extreme events.

More recently, several large-scale, pro-Russian disinformation campaigns have aimed to undermine support for Ukraine and promote pro-Russian sentiment.

Uncovered by activists and journalists, the coordinated efforts used bots and AI to create and spread fake information, reaching millions of social media users.

On X alone, the campaign used more than 10,000 bot accounts to rapidly post tens of thousands of pro-Kremlin messages, attributed to US and European celebrities who seemingly supported the ongoing war against Ukraine.

This scale of influence is significant. Some reports have even found that nearly half of all internet traffic in 2022 was generated by bots. With recent advancements in generative AI – such as OpenAI's ChatGPT models and Google's Gemini – the quality of fake content will only improve.

Social media organisations are seeking to address the misuse of their platforms. Notably, Elon Musk has explored requiring X users to pay for membership to stop bot farms.

Social media giants are capable of removing large amounts of detected bot activity, if they so choose. (Bad news for our friendly shrimp Jesus.)

Keep the dead internet in mind

The dead internet theory is not really claiming that most of your personal interactions on the internet are fake.

It is, however, an interesting lens through which to view the internet. That it is no longer for humans, by humans – this is the sense in which the internet we knew and loved is “dead”.

The freedom to create and share our thoughts on the internet and social media is what made it so powerful. Naturally, it is this power that bad actors are seeking to control.

The dead internet theory is a reminder to be sceptical and to navigate social media and other websites with a critical mind.

Any interaction, trend, and especially "overall sentiment" could very well be synthetic, designed to slightly change the way in which you perceive the world.


Jake Renzella, Lecturer, Director of Studies (Computer Science), UNSW Sydney and Vlada Rozova, Research Fellow in Applied Machine Learning, The University of Melbourne.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


22 Comments

This explains some of the comments on property articles on this site... 😁

Very interesting article, thanks.

I found the link to the disinformation article amusing. Bots could only dream of achieving the rate at which the Guardian can pump out its biased propaganda.

Example of biased propaganda please. 

I've just checked the first 15 articles that appear on their front page. Which ones are biased?

The ones Truth Social told him were biased, of course.

If you haven't understood why western governments are now increasingly desperate to control our mainstream and social media, you should get up to speed real fast.

Righto, so you've never read the Guardian but are just regurgitating hot takes and reckons. Enlightening.

I'm surprised some find it controversial that the Guardian seeks to influence people to hold opinions in line with the progressive set.

I'd say the same about any news source. I happen to read the Guardian daily, but then I'm not credulous.

I don't dispute that they have a progressive slant to their reporting, but they also have very high standards for reporting facts, in the same way the Telegraph or the Times has a conservative slant but also very high standards for reporting facts. That is healthy.

This article is about something entirely different. It is about things that are factually wrong being presented as truth. To compare what this article is talking about to political slant in real, reputable media is not healthy. It creates the impression that all truth is subjective and allows basic misinformation to spread.

The irony in this comment is off the charts.

Pandora's box.

Implicit in this article is the dangerous false assumption that an "authoritative" source of information exists.   

You're overthinking it and getting into epistemological arguments there. Some sources are more authoritative than others.

If you throw an apple into the air you can be almost certain (but never fully certain) that it will fall to the ground. Is this because of the laws of physics, or because invisible pixies jump up and throw it back down to earth? You rely on authoritative (but not infallible) sources of information all the time; you just don't realise you do.

What this article is pointing out is that AI can be used to challenge and manipulate people's faith in those sources of authoritative knowledge for personal gain. 

Your very statement shows how you're vulnerable to being exploited and may have already been targeted. 

You’re getting confused between authority, and credence.  Some ideas are more credible than others.  Credible ideas are consistent with empirical evidence, and have consistently withstood criticism.  At no point does an idea or theory ever assume any authority.

I believe what this article is saying is that AI has no authority in regard to ideas because it's not even a person, and sometimes comes up with silly stuff. The internet is rife with content indistinguishable from AI-generated content. Therefore all opinions voiced on the internet need to be policed by someone (presumably you?) with the authority to say what's true and what's not.

Look out for the Nvidia earnings call tomorrow morning. Governments and corporations are projecting spends of hundreds of billions on AI chips over the next few years, as they see AI as critical to achieving their organisational objectives. Those objectives rarely arise out of altruism and the goodness of their hearts.

Generative AI is poisoning its own well.

Companies in search and social media should be most worried about this, because it will curtail human engagement, which will devalue their platforms.

Yes, this is an interesting point. Once the content that AI trains itself on is itself mostly AI-generated content, will its answers have any worth to humans? Will we see genuine creativity or garbage?

Amusing how Russia is used as a scapegoat when US agencies continue to lie and cover up - and our own MOH regurgitated the CDC line.

@RepThomasMassie: "We know officials at @CDCgov lied about vaccine efficacy and natural immunity. I told @HHSGov Assistant Secretary Egorin that continuing to withhold information about this cover-up is unjustifiable."

https://x.com/RepThomasMassie/status/1792567383602725121

Right? It's always a given that authoritarian regimes are using AI to spread disinformation. But I'm not convinced that non-authoritarian regimes aren't doing the same thing. It's a digital cold war enabled by AI. It's hard to believe anything on social media, and very many articles from reputable sources simply regurgitate social media.

Disinformation has been around as long as information has, and governments all over have used it to their advantage to varying degrees. Remember the Weapons of Mass Destruction we were all told about by the MSM and "reliable intelligence sources"?

It's only disinformation if it doesn't suit the required narrative.