What is the dead internet theory?


Exactly three years ago today, Elon Musk tweeted that 90 percent of his tweet replies on what was then called Twitter were bot accounts. His insistence that Twitter was essentially one large bot farm was one of the main sticking points during his acquisition of the company. Bots, particularly porn bots, have continued to be a problem on Musk's X, so much so that even OpenAI head honcho Sam Altman took notice.
“I never really took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter(sic) accounts now,” Altman tweeted this week.
If Altman's comments instantly called to mind the "We're all trying to find the guy who did this" meme, you're definitely not alone.
As the CEO of the world’s most popular AI chatbot, Altman received a decent amount of roasting for his post. “That’s a really good observation — here’s why you’ve nailed it…” one user posted, mimicking ChatGPT's signature tone.
But Altman's post raised another question for a lot of users — what is dead internet theory anyway?
The origin of the dead internet theory
For the uninitiated, the dead internet theory states that an increasing amount of the content on the internet is generated by machines, not people. That includes everything from the news you read to posts on Reddit and whole social media accounts on places like Facebook, Instagram, and X. The theory further holds that algorithms curate this content so that you see it more often, reducing your chances of engaging with real humans online.
The exact origin of the theory is difficult to pinpoint, but some people point to this forum post on Agora Road's Macintosh Cafe by user IlluminatiPirate, who built off of a post by another user. In the post, IlluminatiPirate posits that the bots started running the show around 2016, after the user noticed a rash of oddly similar conversations on the notorious online forum 4chan.
The dead internet theory began as a fringe conspiracy theory, but it gained traction when publications like The Atlantic covered it in 2021. Soon, it was being actively discussed on places like Reddit. Elon Musk fueled the fire even more during his 2022 bid to purchase Twitter when he claimed the site was hiding how many bots were truly on the platform.
Undoubtedly, bots have been a massive problem on the internet for many years. Bots in various forms have been used for black-hat SEO, misinformation campaigns from intelligence organizations, and good old-fashioned cybercrime.
Where does AI come into all of this?
In addition to AI-directed bots, the dead internet theory has evolved to include the existence of AI slop — the term for lowest-common-denominator AI content, often produced en masse by content farmers.
Think of the crappy AI image memes that go viral on Facebook, or the cheap AI videos that go viral on Reels, TikTok, YouTube, and X. AI slop comes in many forms — Studio Ghibli memes, animal videos, AI celebrity baby videos — and it's becoming ubiquitous on social media. The end result is an internet filled with low-quality and often manipulative AI content, crowded with fake comments and likes from AI porn bots, and surrounded by AI video ads. In this case, zombie internet might be a better term.
Think also of the claims that students are using ChatGPT to complete homework assignments, which were created by teachers using ChatGPT, and which will be graded using ChatGPT — all with virtually no human involvement.
The dead internet theory may seem a bit far-fetched, but bots have been a persistent enough problem on social media that honest-to-goodness studies have been done about the phenomenon, and tutorials have been written to help people identify whether they're arguing with a real person or a random AI bot from a Russian troll farm.
To date, no one is entirely sure how many bot accounts exist on social media since every platform, including Musk’s X, keeps those numbers close to the vest. However, there has been a noticeable uptick in bot activity just about everywhere, so maybe the theory is starting to become real after all.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.