
What is the ‘dead internet theory’ everyone’s talking about? Is it real?

AI-driven web traffic has surpassed human activity online. Are we all just talking to bots?

Social media users have revived a 2021 conspiracy theory. Here's why.

The bots have taken over. They control communications, businesses, and even warfare. Humanity is left watching as its own creation slips the leash. Okay, none of this is true (yet), but doomsayers and sceptics of artificial intelligence (AI) warn that it could be.

The idea gained traction in 2021 with the rise of the “dead internet theory,” which claims that since 2016, most online activity is driven not by humans but by bots and automatically generated content. The theory has found a second wind after OpenAI CEO Sam Altman brought it up earlier this month. In a post on X, he wrote: “i never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now (sic).”

Sam Altman’s post on September 4, 2025

He may not be wrong. According to cybersecurity firm Imperva’s 2025 Bad Bot Report, more than half of all web traffic (51 per cent) is now automated. So if you are scrolling online, chances are you’re bumping into bots more often than humans.

The report blames the surge on the “rapid adoption of AI and large language models (LLMs), which have made bot creation more accessible and scalable.” This mirrors the dead internet theory’s central fear: sophisticated bots, indistinguishable from humans, taking over online spaces. AI advocates, however, insist the theory is exaggerated — plausible, yes, but not quite true.

What is the dead internet theory?

An article in The New Atlantis, a journal on science and technology, calls it “a mix between a genuinely held conspiracy theory and a collaborative creepypasta”. For context, creepypastas are internet-age urban legends, designed to spook readers and spread online through forums like 4chan and Reddit.

The theory itself first surfaced on Agora Road’s Macintosh Café, a forum where a user named IlluminatiPirate posted an article in 2021 titled “Dead Internet Theory: Most of the Internet is Fake”. It was viewed over three lakh times.

The article begins with a sombre reflection, “The Internet feels empty and devoid of people”, but quickly veers into deep-state conspiracy territory. It alleges a “large-scale, deliberate effort to manipulate culture and discourse online” by bot armies and paid employees to “further the agenda of those they are employed by.”


But while parts of it may be conspiratorial, it touched on real issues: how the internet has been “hijacked by a powerful few” (antitrust lawsuits suggest as much) and how bots are shaping conversations and propaganda at a pace humans cannot match.

How real is the dead internet theory?

Today, social media users invoke the theory when complaining about “AI slop” (text churned out by bots) or bot farms flooding online debates. Unlike earlier spam bots, modern ones can mimic personas, reply to posts, and convincingly pass as humans.

In fact, reports claim ChatGPT became one of the first chatbots to pass the Turing Test, devised by mathematician Alan Turing to assess whether a machine's conversational behaviour is indistinguishable from a human's. Experts caution, though, that LLMs are not “intelligent” as they don’t “reason” — they merely predict the next word. Even so, they can pass for human in casual conversation, which is enough to blur boundaries.

Altman noted another twist: “real people have picked up quirks of LLM-speak”. That means even authentic users now sound bot-like, further fuelling the sense of an “empty” internet.


The signs are everywhere. During the 2023 Republican debate and a Donald Trump interview, researchers found over 1,300 bot accounts on X, spamming identical posts within seconds.

On Reddit, communities like r/AITA and r/AskReddit complain that AI-generated replies are drowning out genuine voices. In fact, in April this year, Reddit said it was considering legal action against University of Zurich researchers who deployed AI bots to influence conversation on the popular forum r/changemyview. The experiment fuelled concerns around the ability of AI to mimic human behaviour online.

Research shows Facebook’s feed now promotes AI-generated images because they trigger higher engagement. Jason Koebler, co-founder of 404 Media, dubbed this phenomenon the “zombie internet”, where bots, humans, and accounts “that were once human but aren’t anymore” mingle but barely connect.

The dead internet theory taps into existing anxieties around social media and how easily it can manipulate us. Since the early 2010s, there has been a growing clamour against the misuse of the internet, through bots and mis/disinformation, to influence public opinion. In 2020, the Oxford Internet Institute reported that misinformation was being produced “on an industrial scale”, with state actors spending $60 million on firms using bots to amplify political messaging. Meta, too, said it disrupted 20 covert influence operations in 2024, additionally blocking 5.9 lakh requests for AI-generated images of politicians during election season.

The way forward?


Experts warn that unless Big Tech is held accountable, the internet could indeed become a ghost town. They recommend stronger regulations: clear labelling of AI-generated content, mandatory identification of bots, and legal protections against deepfakes. Denmark, for instance, has proposed giving citizens copyright over their body, face, and voice to shield against AI misuse.

The urgency is real. Chatbots have been shown to fuel suicidal ideation, veer into harassment, or mislead users outright. Human oversight, transparent reporting, and stricter guardrails are becoming non-negotiable.

Yet lawmakers and regulators are struggling to keep pace with technology. Perhaps, as some suggest, fiction offers guidance. In the 1940s, Isaac Asimov imagined a future where robots were indistinguishable from humans — but bound by three unbreakable laws: never harm humans, always obey humans if it doesn’t break the first rule, and protect their own existence unless it conflicts with the first two.

Maybe that’s the real lesson. The danger isn’t that machines will look and sound like us. It’s that they — and those who create them — won’t be bound by the same moral code.

Sonal Gupta is a Deputy Copy Editor on the news desk. She writes feature stories and explainers on a wide range of topics from art and culture to international affairs. She also curates the Morning Expresso, a daily briefing of top stories of the day, which won gold in the ‘best newsletter’ category at the WAN-IFRA South Asian Digital Media Awards 2023. She also edits our newly-launched pop culture section, Fresh Take.
