What is the Dead Internet Theory—and why it went viral in the 2020s
The Dead Internet Theory is a conspiratorial hypothesis claiming that the overwhelming majority of online activity is generated by bots and AI algorithms, not living people. Real users supposedly make up only a small fraction of traffic, while corporations and governments use automated systems to manipulate opinion and control the information space (S001).
⚠️ Key claims of the theory
Proponents point to specific signs: identical comments under popular posts, accounts with minimal history that suddenly generate content, synchronized likes from thousands of profiles. Examples like "shrimp Jesus"—absurd AI-generated images with millions of views—are seen as evidence of a long-term strategy (S001).
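These signs are, at least in principle, mechanically checkable. Below is a minimal sketch of how a moderation pipeline might flag the "identical comments posted in sync" pattern just described; the input field names (`text`, `author_id`, `timestamp`) are hypothetical, and real platforms rely on far richer behavioral signals.

```python
from collections import defaultdict

def flag_comment_clusters(comments, min_accounts=20, window_seconds=300):
    """Flag groups of near-identical comments posted by many distinct
    accounts within a narrow time window.

    `comments` is a list of dicts with hypothetical fields:
    'text', 'author_id', and 'timestamp' (unix seconds).
    """
    by_text = defaultdict(list)
    for c in comments:
        # Normalize aggressively: copy-pasted bot comments often differ
        # only in case, whitespace, or trailing punctuation.
        key = " ".join(c["text"].lower().split()).rstrip(".!?")
        by_text[key].append(c)

    suspicious = []
    for text, group in by_text.items():
        authors = {c["author_id"] for c in group}
        times = sorted(c["timestamp"] for c in group)
        posted_in_burst = times[-1] - times[0] <= window_seconds
        # Many distinct accounts posting the same text almost at once
        # matches the "synchronized activity" sign described above.
        if len(authors) >= min_accounts and posted_in_burst:
            suspicious.append((text, sorted(authors)))
    return suspicious
```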
Where creating a convincing fake account once required significant resources, a single person can now control thousands of bots that generate unique content through generative AI.
According to the theory, these accounts first build audiences with harmless content, then get weaponized to spread disinformation, political propaganda, or commercial manipulation. More details in the Conspiracy section.
🧩 Why the theory resonates
Its popularity stems not only from conspiratorial thinking, but from real observations: discussion quality is declining, algorithms show strange content, distinguishing profiles from bots is getting harder. Add to this documented cases of bot farms, election interference through fake accounts, and data breach scandals.
- **Generative AI as catalyst:** ChatGPT, Midjourney, and similar tools enable creating convincing text, images, and video in seconds, blurring the line between the "living" and the "dead" internet.
🔎 Historical context
The idea that the internet is populated by more than just people isn't new. In the 2000s, the talk was of "trolls" and "shills": real people posting for pay or on assignment. The turning point came in the 2010s, with large-scale evidence of automated systems: "troll factories" and bots influencing elections.
The phrase "Dead Internet Theory" took shape on anonymous imageboards such as 4chan in the late 2010s and was popularized by a widely shared 2021 post on the Agora Road forum. By 2023–2024, the theory moved beyond fringe platforms and became a topic of discussion in mainstream media and academic circles, partly because some of its elements turned out to be eerily close to reality.
Steel Version of the Arguments: Five Most Compelling Cases for the Dead Internet Theory
Before critically examining the theory, it's necessary to present it in its strongest form, the so-called "steelman". This is an intellectually honest approach: first strengthen the opponent's position, then analyze it.
Below are the five most substantial arguments from dead internet theory proponents that genuinely deserve serious consideration.
📊 First Argument: Traffic Statistics Show Anomalous Bot Growth
Cybersecurity research in recent years demonstrates that a significant portion of internet traffic is generated by automated systems. According to various analytics firms, between 30% and 50% of all web traffic comes from bots — and not all of them are "good" bots (search crawlers, monitoring systems).
A substantial portion consists of malicious bots, scrapers, spam bots, and metric manipulation systems. On social networks, the situation is even more alarming: scandals periodically emerge revealing that major accounts have significant portions of fake profile followers.
| Platform | Official Estimate | Independent Research |
|---|---|---|
| Twitter (X), 2022 | ~5% bots | 15–20% bots |
| Facebook (Meta) | Removes billions of fake accounts annually | Scale of undetected accounts unknown |
If platforms are removing billions of accounts, how many more remain undetected?
🕳️ Second Argument: Content and Discussion Quality Is Degrading Exponentially
Long-term internet users note a persistent feeling: discussion quality is declining, original content is becoming scarcer, and algorithms increasingly show repetitive, templated, or outright meaningless material.
Comments under popular posts often look like collections of clichés, emojis, and shallow reactions lacking depth. Forums and communities that were once vibrant are turning into echo chambers with predictable behavioral patterns.
Theory proponents argue: this isn't simply the result of "eternal September" (the phenomenon where an influx of new users lowers the average discussion level), but rather a consequence of a significant portion of "participants" being bots trained to imitate human behavior.
AI-generated comments are becoming increasingly convincing, but they lack the genuine creativity, irony, and contextual understanding characteristic of authentic communication. The result — an internet that looks active but feels empty.
⚠️ Third Argument: Documented Cases of Mass Manipulation and Disinformation Campaigns
Compelling evidence already exists that social networks are manipulated by bots to influence public opinion through disinformation — and this has been happening for years (S001).
Scandals surrounding Cambridge Analytica, election interference in the US and Europe, operations to discredit political opponents — all of this is documented, investigated, and partially acknowledged by the platforms themselves. These cases demonstrate that the technology and infrastructure for mass creation of fake accounts exists and is actively used.
- If such operations are possible and profitable, it's logical to assume their scale is far larger than we know.
- Each exposed case is merely the tip of the iceberg.
- The bulk of manipulation remains undetected.
🧬 Fourth Argument: Generative AI Has Made Creating Fake Content Trivially Simple
Before the emergence of GPT-3, GPT-4, Midjourney, and similar systems, creating convincing fake content required significant resources: copywriters, designers, time to create unique texts and images.
Now a single person with API access can generate, in an hour, thousands of unique posts, comments, images, and even videos that appear to come from different people. This has radically changed the economics of bot farms: scaling used to be expensive; now it's nearly free.
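To make the economic shift concrete, here is a back-of-the-envelope comparison. Every number in it (token counts, API pricing, copywriter output, day rate) is an illustrative assumption, not a quoted figure.

```python
# Back-of-the-envelope economics of automated posting. Every number
# below is an illustrative assumption, not a quoted price.

TOKENS_PER_POST = 150          # a short social media comment
PRICE_PER_1K_TOKENS = 0.002    # assumed API price, USD
POSTS_PER_DAY = 10_000

llm_cost = POSTS_PER_DAY * TOKENS_PER_POST / 1000 * PRICE_PER_1K_TOKENS

# Pre-LLM baseline: human copywriters producing unique short posts.
HUMAN_POSTS_PER_DAY = 200      # assumed output of one writer
HUMAN_DAY_RATE = 150.0         # assumed daily rate, USD
human_cost = POSTS_PER_DAY / HUMAN_POSTS_PER_DAY * HUMAN_DAY_RATE

print(f"LLM:   ${llm_cost:.2f}/day for {POSTS_PER_DAY:,} unique posts")  # $3.00
print(f"Human: ${human_cost:.2f}/day for the same volume")               # $7500.00
```

Under these assumptions the automated pipeline comes out roughly 2,500 times cheaper, which is exactly the shift in scaling economics described above.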
Modern language models can imitate the style, tone, and even idiosyncrasies of specific users, making bot detection an increasingly complex task.
If the technology enables creating indistinguishable-from-human content at industrial scale, it's reasonable to assume this is already happening — and in far greater volumes than we realize.
🔁 Fifth Argument: Platforms' Economic Incentives Encourage Artificial Activity
The business model of most social platforms is based on engagement metrics: the more users, views, likes, and comments, the higher the company's valuation and advertising revenue.
This creates a perverse incentive: platforms benefit from inflating activity indicators, even if part of that activity is bot-generated. Removing fake accounts reduces metrics, which negatively impacts stock prices and attractiveness to advertisers.
- Recommendation algorithms are optimized to maximize time spent on the platform, not content quality.
- If bot-generated content retains user attention, the algorithm will promote it.
- This creates a feedback loop: bots generate content → algorithms amplify it → users engage → data trains new bots.
- In such a system, the boundary between "living" and "dead" internet genuinely blurs.
Evidence Base: What Research Says About the Real State of the Internet
From arguments to facts: independent research, academic work, and platform data reveal the true scale of automation. We need to separate documented phenomena from speculation.
📊 Quantitative Data on Bots: From Traffic to Social Networks
Cybersecurity reports document high levels of automated traffic: 30% to 50% of all web traffic is generated by bots. But the structure is critical: a significant portion consists of legitimate bots (Google and Bing search crawlers, monitoring systems, availability checks).
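As a rough illustration, a first-pass traffic analysis might simply look for self-identifying crawlers in the User-Agent header. The sketch below (the bot list is illustrative, not exhaustive) also exposes the method's limit: malicious bots spoof browser user-agents, which is why serious estimates add behavioral and network signals.

```python
import re

# Substrings used by well-known self-identifying crawlers;
# the list is illustrative, not exhaustive.
KNOWN_BOT_PATTERNS = re.compile(
    r"Googlebot|bingbot|DuckDuckBot|UptimeRobot|facebookexternalhit",
    re.IGNORECASE,
)

def rough_traffic_split(user_agents):
    """First-pass split of requests into declared bots vs. everything
    else, given a list of User-Agent strings from a server log."""
    declared_bots = sum(1 for ua in user_agents if KNOWN_BOT_PATTERNS.search(ua))
    return {
        "declared_bots": declared_bots,
        # "Other" mixes humans with bots that spoof browser user-agents,
        # which is why a split like this is only a lower bound.
        "other": len(user_agents) - declared_bots,
    }
```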
| Platform | Official Estimate | Independent Research |
|---|---|---|
| Twitter (2022) | < 5% of active users | 9–15% |
| Facebook (2023) | 5–6% (1.5 billion removed) | Similar scale |
| Instagram, TikTok | Not disclosed precisely | Comparable to Facebook |
The gap between official and independent figures reflects methodological differences and platforms' incentives to minimize the problem.
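A toy example makes the methodological point concrete. Suppose an upstream classifier assigns each account a bot-likelihood score; the reported "percentage of bots" then depends entirely on where the reporting threshold is set. The scores and thresholds below are invented for illustration.

```python
def bot_share(scores, threshold):
    """Fraction of accounts flagged as bots: those whose bot-likelihood
    score meets or exceeds the chosen threshold."""
    flagged = sum(1 for s in scores.values() if s >= threshold)
    return flagged / len(scores)

# Invented bot-likelihood scores for five accounts.
scores = {"a": 0.95, "b": 0.80, "c": 0.65, "d": 0.40, "e": 0.10}

# The same data yields a threefold difference in the headline number.
print(bot_share(scores, threshold=0.9))  # 0.2: a strict, platform-friendly cut
print(bot_share(scores, threshold=0.6))  # 0.6: a looser, researcher-style cut
```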
🧪 Disinformation Campaigns: From Theory to Documented Operations
Social networks have been manipulated by bots to influence public opinion through disinformation for years (S001, S002). Case studies: Russian "troll factory" operations (2016 U.S. elections), anti-vaccine campaigns, attacks on journalists and activists, commercial manipulation.
The mechanism: sophisticated networks of accounts mimic organic behavior—posting neutral content, interacting with each other, building followers, then activating for targeted messaging (S001, S002). Entire armies of accounts can remain "dormant" for years before deployment.
This means disinformation infrastructure is built in advance as a long-term asset, not improvised as needed.
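From the defender's side, the same pattern suggests a heuristic: flag accounts with a long dormant stretch followed by a sudden, topically narrow burst of activity. A minimal sketch, with hypothetical inputs and invented thresholds:

```python
from datetime import timedelta

def looks_like_activated_sleeper(post_times, topics,
                                 dormancy=timedelta(days=365),
                                 burst_window=timedelta(days=7),
                                 burst_posts=50):
    """Heuristic for the 'dormant account activation' pattern: a long
    gap in posting followed by a sudden high-volume burst focused on a
    single topic. `post_times` is a chronologically sorted list of
    datetimes; `topics` holds the matching topic label for each post."""
    if len(post_times) < burst_posts + 1:
        return False
    longest_gap = max(b - a for a, b in zip(post_times, post_times[1:]))
    recent = [t for t in post_times if post_times[-1] - t <= burst_window]
    recent_topics = topics[-len(recent):]
    return (longest_gap >= dormancy            # the account slept for a long time
            and len(recent) >= burst_posts     # then suddenly became prolific
            and len(set(recent_topics)) == 1)  # on exactly one topic
```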
🧾 Visual Content Verification and Fact-Checking Challenges in the AI Era
The volume of claims requiring fact-checking exceeds what humans can manually process by several orders of magnitude (S006). Visual content is more influential than text and naturally accompanies fake news.
Generative AI has exacerbated the problem: creating convincing fake images, videos, and audio is now accessible to any user. The "shrimp Jesus" phenomenon—absurd AI-generated images with millions of views—demonstrates how easily attention can be manipulated.
Behind the apparent harmlessness may lie a long-term strategy (S001, S002): accounts build audiences with viral content, then pivot to serious manipulation. This connects to the broader problem of conspiratorial thinking and its spread through algorithms.
🌐 Web Evolution and the Centralization Problem: From Web 2.0 to Web 3.0
Web architecture is constantly being reimagined to handle massive data volumes (S011). Web 3.0, a proposed decentralized architecture intended to be more intelligent and secure, approaches web data ownership through distributed technologies.
- **Web 3.0 criticism:** decentralization could worsen bot and disinformation problems, since the lack of central control makes moderation and removal of harmful content more difficult.
- **Web 3.0 defense:** cryptographic identity verification and blockchain transparency will help distinguish real users from bots (a minimal sketch of this idea follows below).
Both positions remain largely theoretical for now. Web 3.0 is not mature and remains contested (S011). The actual outcome depends on how challenges of scalability, energy consumption, and social coordination are resolved.
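For the defense side of the argument, the core primitive is an ordinary digital signature: an identity is a keypair, and every post is signed with the private key. Here is a minimal sketch using Ed25519 via the third-party Python `cryptography` package; note that it proves control of a key, not humanity.

```python
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# An identity is a keypair; the public key can be published openly.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

post = b"this post provably comes from the holder of this key"
signature = private_key.sign(post)

try:
    public_key.verify(signature, post)  # raises InvalidSignature on mismatch
    print("signature valid: written by the key holder")
except InvalidSignature:
    print("signature invalid or post tampered with")

# The limit of the argument: this proves *key control*, not humanity.
# A bot operator can generate keypairs just as cheaply as accounts.
```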
Mechanisms and Causal Relationships: Why the Internet Is Becoming More Automated
Understanding what's happening on the internet requires analyzing the mechanisms that produce the observable phenomena. More details in the Psychology of Belief section.
This isn't a conspiracy — it's a natural consequence of technological progress, market incentives, and tool accessibility.
⚙️ Attention Economy and Engagement Metrics as Automation Drivers
Social platforms operate within an attention economy: revenue depends on user time on platform and engagement metrics (likes, comments, shares). Algorithms are optimized for attention retention, even if achieved through provocative or absurd content.
In such a system, bots become economically viable: they generate activity, boost metrics, create the illusion of popularity. For brands, it's an investment in visibility that pays off through advertising. For platforms, removing bots means lowering metrics. The result is a system that structurally incentivizes artificial activity.
🧬 Technological Progress: From Primitive Scripts to Generative AI
Early 2000s bots were primitive and easily detected. Modern bots based on large language models generate unique, contextually relevant content that's virtually indistinguishable from human output (S001).
Generative AI has lowered the barrier to entry: one person can manage thousands of accounts, each generating unique content and adapting to context. This is a natural consequence of tool accessibility, not a coordinated conspiracy.
🔁 Feedback Loops: How Algorithms Amplify Automated Content
Recommendation algorithms create feedback loops that amplify certain types of content regardless of origin. If a bot-generated post receives high engagement, the algorithm interprets this as a quality signal and shows the post to more users.
| Cycle Stage | What Happens | Result |
|---|---|---|
| Artificial Activity | Bots create likes, comments, shares | Post appears popular |
| Algorithmic Amplification | System shows post to larger audience | Organic interactions grow |
| Model Training | Interaction data used to improve AI | Next generation of bots becomes more convincing |
| Closed Loop | Bots learn from humans, humans interact with bots | Boundary between "real" and "artificial" blurs |
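The cycle in the table can be captured in a toy simulation. The model below is deliberately crude (exposure proportional to score, a fixed organic engagement rate, every parameter invented), but it shows how a purely artificial initial push can bootstrap apparently organic growth.

```python
import random

def simulate_amplification(rounds=8, seed_bot_likes=200,
                           organic_rate=0.05, seed=42):
    """Toy model of the cycle above: bots seed a post's engagement
    score, the ranker grants exposure proportional to that score, and
    exposure yields organic engagement that feeds back into the score."""
    rng = random.Random(seed)
    score = seed_bot_likes                    # stage 1: artificial activity
    for r in range(1, rounds + 1):
        impressions = 10 * score              # stage 2: algorithmic amplification
        organic = sum(rng.random() < organic_rate
                      for _ in range(impressions))
        score += organic                      # stage 3: the loop closes
        print(f"round {r}: score={score}, impressions={impressions}")
    return score

simulate_amplification()
```

With `seed_bot_likes=0` the score never moves; in this toy model, the artificial seed is what bootstraps the loop the table describes.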
📊 Scale of the Problem: Numbers and Reality
Research shows that bots constitute a significant portion of social media activity. However, this doesn't mean the internet is "dead" — it means it's transforming under the influence of economic incentives and technological capabilities.
The problem isn't the existence of bots, but their integration into an ecosystem where platforms benefit from their presence and users can't distinguish automated content from organic. This creates information asymmetry that undermines trust in the internet as an information source.
🎯 Social Effects: Why People Believe in Dead Internet Theory
The growing presence of bots and automated content creates a sense that the internet is becoming less authentic. People notice repeating patterns, template responses, lack of depth in discussions — and this observation is valid (S004).
However, interpretation of this phenomenon often shifts into conspiracy: instead of seeing a system of economic incentives and technological capabilities, people search for a hidden agent — the state, a corporation, an AI uprising. This is a simpler explanation than understanding complex interactions between algorithms, bots, and human behavior.
1. Notice the change (content has become less authentic): a valid observation.
2. Explain it through conspiracy: cognitively easier than analyzing systemic factors.
3. Find "evidence" of the conspiracy: confirmation bias in action.
4. Spread the theory: social validation reinforces belief.
🔍 Distinguishing Between Fact and Interpretation
Fact: the proportion of automated content on the internet is growing. This is confirmed by research and observable in platform behavior.
Interpretation: this is the result of a conscious plan to capture the internet by AI or the state. This is an assumption that goes beyond available data and requires belief in a coordinated conspiracy.
The mechanisms of internet automation aren't a mystery: they form an open system of economic incentives, technological capabilities, and algorithmic feedback loops. Understanding these mechanisms allows critical evaluation of information without resorting to conspiratorial explanations.
Protection from manipulation doesn't begin with searching for hidden enemies, but with understanding how the systems we use every day work. This requires media literacy, critical thinking, and willingness to acknowledge the complexity of reality instead of seeking simple answers.
