
© 2026 Deymond Laplasa. All rights reserved.

Cognitive immunology. Critical thinking. Defense against disinformation.

📁 Mind Control
⚠️Ambiguous / Hypothesis

Dead Internet Theory: Why Millions Believe Bots Have Taken Over the Web — and What's Really Happening

The Dead Internet Theory claims that most online activity consists of bots and AI agents rather than real people. This conspiratorial idea is gaining traction amid genuine problems: massive bot farms, disinformation campaigns, and AI-generated content. We examine where fact ends and paranoia begins, what data confirms social media manipulation, and why the theory itself is a lens through which to view the modern internet—but not an absolute truth.

🔄 Updated: February 8, 2026
📅 Published: February 5, 2026
⏱️ Reading time: 10 min

Neural Analysis
  • Topic: Dead Internet Theory — a conspiratorial claim that the internet is filled with AI bots rather than real people
  • Epistemic Status: Moderate confidence — the theory itself is unconfirmed, but individual elements (bot farms, disinformation) have strong evidentiary basis
  • Evidence Level: Observational studies, social media analysis, documented bot farm cases (S001, S002); systematic reviews of the theory itself are absent
  • Verdict: Dead Internet Theory in its radical form (majority of content is bots) is not supported by data. However, reality is more concerning: social networks are indeed manipulated by bots to spread disinformation, armies of fake accounts are created, AI-generated content is growing exponentially. The theory is a useful metaphor for understanding the degradation of online space quality, but not literal truth.
  • Key Anomaly: the claim quietly shifts from "bots influence discourse" to "the internet is dead and controlled by AI" — a logical leap made without evidence of scale.
  • 30-Second Check: Open any social network, find a post with high engagement — check commenters' profiles: is there history, photos, activity? Bots typically have empty profiles or repetitive content.
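The 30-second check above can be sketched as a toy scoring heuristic. The field names (account_age_days, has_profile_photo, and so on), weights, and thresholds are hypothetical illustrations, not a real platform API or validated detection criteria:

```python
def bot_likelihood_score(profile: dict) -> int:
    """Return a 0-100 'bot-likeness' score from public profile signals.
    Weights and thresholds are illustrative, not validated criteria."""
    score = 0
    if profile.get("account_age_days", 0) < 30:
        score += 30                      # very young account
    if profile.get("post_count", 0) == 0:
        score += 20                      # no posting history
    if not profile.get("has_profile_photo", False):
        score += 20                      # empty profile
    follow_ratio = profile.get("following", 0) / max(profile.get("followers", 0), 1)
    if follow_ratio > 20:
        score += 30                      # mass-following pattern
    return min(score, 100)

# A profile matching the article's red flags: young, empty, mass-following
suspicious = {"account_age_days": 5, "post_count": 0,
              "has_profile_photo": False, "following": 3000, "followers": 12}
print(bot_likelihood_score(suspicious))  # 100
```

A real detector would weigh far more signals (posting cadence, text similarity, network structure), but the point of the manual check is the same: several weak signals together are more telling than any one alone.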
Imagine: you're scrolling through your social media feed, liking posts, leaving comments—and suddenly you realize that most of the accounts around you aren't people, but bots. The Dead Internet Theory claims exactly this: the web has long been taken over by artificial intelligence, and living users have become vanishingly rare. It sounds like a dystopian plot, but millions of people worldwide are seriously discussing this idea—and there are reasons for that. Let's explore where conspiracy ends and real data about bot farms, disinformation, and AI-generated content begins—content that's already shaping our perception of reality.

📌What is the Dead Internet Theory—and why it went viral in the 2020s

The Dead Internet Theory is a conspiratorial hypothesis claiming that the overwhelming majority of online activity is generated by bots and AI algorithms, not living people. Real users supposedly make up only a small fraction of traffic, while corporations and governments use automated systems to manipulate opinion and control the information space (S001).

⚠️ Key claims of the theory

Proponents point to specific signs: identical comments under popular posts, accounts with minimal history that suddenly generate content, synchronized likes from thousands of profiles. Examples like "shrimp Jesus"—absurd AI-generated images with millions of views—are seen as evidence of a long-term strategy (S001).

While creating a convincing fake account once required significant resources, now a single person can control thousands of bots generating unique content through generative AI.

According to the theory, these accounts first build audiences with harmless content, then get weaponized to spread disinformation, political propaganda, or commercial manipulation. More details in the Conspiracy section.

🧩 Why the theory resonates

Its popularity stems not only from conspiratorial thinking, but from real observations: discussion quality is declining, algorithms show strange content, distinguishing profiles from bots is getting harder. Add to this documented cases of bot farms, election interference through fake accounts, and data breach scandals.

Generative AI as catalyst
ChatGPT, Midjourney, and similar tools can produce convincing text, images, and video in seconds, blurring the line between a "living" and a "dead" internet.

🔎 Historical context

The idea that the internet is populated by more than just people isn't new. In the 2000s, "trolls" and "shills" were discussed—real people acting on commission. The turning point came in the 2010s with large-scale evidence of automated systems: "troll factories," bots influencing elections.

The formulation "Dead Internet Theory" emerged around 2016–2021 on anonymous imageboards like 4chan. By 2023–2024, the theory moved beyond fringe platforms and became a topic of discussion in mainstream media and academic circles—partly because some of its elements turned out to be eerily close to reality.

Figure: schematic representation of a modern bot farm — central control nodes, thousands of connected accounts, and coordinated activity patterns that can be detected through metadata analysis.
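One of the metadata patterns mentioned above — many distinct accounts posting in near-perfect synchrony — can be illustrated with a minimal sketch. The window size and account threshold are arbitrary assumptions for illustration, not values from any real detection system:

```python
from collections import defaultdict

def coordinated_windows(posts, window_s=10, min_accounts=5):
    """Flag time windows in which suspiciously many distinct accounts post.

    posts: iterable of (account_id, unix_timestamp) pairs.
    Returns {window_index: accounts} for windows where at least
    min_accounts distinct accounts posted - a crude coordination signal.
    """
    buckets = defaultdict(set)
    for account, ts in posts:
        buckets[ts // window_s].add(account)
    return {w: accs for w, accs in buckets.items() if len(accs) >= min_accounts}

# Six accounts firing within the same 10-second window vs. two organic posts
burst = [(f"bot_{i}", 1_700_000_003 + i) for i in range(6)]
organic = [("user_a", 1_700_000_100), ("user_b", 1_700_000_900)]
print(coordinated_windows(burst + organic))
```

Real coordination analysis is statistical (it must rule out organic bursts around breaking news, for instance), but timestamp clustering of this kind is the basic intuition behind it.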

🧱Steelman Arguments: The Five Most Compelling Cases for the Dead Internet Theory

Before critically examining the theory, it's necessary to present it in its strongest form — the so-called "steelman". This is an intellectually honest approach: first strengthen the opponent's position, then analyze it.

Below are the five most substantial arguments from dead internet theory proponents that genuinely deserve serious consideration.

📊 First Argument: Traffic Statistics Show Anomalous Bot Growth

Cybersecurity research in recent years demonstrates that a significant portion of internet traffic is generated by automated systems. According to various analytics firms, between 30% and 50% of all web traffic comes from bots — and not all of them are "good" bots (search crawlers, monitoring systems).

A substantial portion consists of malicious bots, scrapers, spam bots, and metric manipulation systems. On social networks, the situation is even more alarming: scandals periodically emerge revealing that major accounts have significant portions of fake profile followers.

Platform | Official Estimate | Independent Research
Twitter (X), 2022 | ~5% bots | 15–20% bots
Facebook | Removes billions of fake accounts annually | Scale of undetected accounts unknown

If platforms are removing billions of accounts, how many more remain undetected?

🕳️ Second Argument: Content and Discussion Quality Is Degrading Exponentially

Long-term internet users note a persistent feeling: discussion quality is declining, original content is becoming scarcer, and algorithms increasingly show repetitive, templated, or outright meaningless material.

Comments under popular posts often look like collections of clichés, emojis, and shallow reactions lacking depth. Forums and communities that were once vibrant are turning into echo chambers with predictable behavioral patterns.

Theory proponents argue: this isn't simply the result of "eternal September" (the phenomenon where an influx of new users lowers the average discussion level), but rather a consequence of a significant portion of "participants" being bots trained to imitate human behavior.

AI-generated comments are becoming increasingly convincing, but they lack the genuine creativity, irony, and contextual understanding characteristic of authentic communication. The result — an internet that looks active but feels empty.
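The "collections of clichés" observation can be made concrete with a toy heuristic: group near-duplicate comments by word-set overlap (Jaccard similarity). This sketch is far cruder than production bot-detection systems, and the threshold is an arbitrary assumption:

```python
import re

def tokens(text: str) -> set:
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two comments, 0.0 to 1.0."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def template_clusters(comments, threshold=0.8):
    """Greedily group near-identical comments - a crude signal of
    template reuse by coordinated accounts."""
    clusters = []
    for c in comments:
        for cluster in clusters:
            if jaccard(c, cluster[0]) >= threshold:
                cluster.append(c)
                break
        else:
            clusters.append([c])
    return [cl for cl in clusters if len(cl) > 1]

comments = [
    "Great post! So inspiring!",
    "great post so inspiring",
    "Great post, so inspiring!!",
    "I disagree with the second paragraph about bot traffic.",
]
print(template_clusters(comments))  # the three cliché variants cluster together
```

Modern LLM-generated comments defeat exact-overlap tricks like this one, which is precisely why, as the argument notes, detection keeps getting harder.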

⚠️ Third Argument: Documented Cases of Mass Manipulation and Disinformation Campaigns

Compelling evidence already exists that social networks are manipulated by bots to influence public opinion through disinformation — and this has been happening for years (S001).

Scandals surrounding Cambridge Analytica, election interference in the US and Europe, operations to discredit political opponents — all of this is documented, investigated, and partially acknowledged by the platforms themselves. These cases demonstrate that the technology and infrastructure for mass creation of fake accounts exists and is actively used.

  • If such operations are possible and profitable, it's logical to assume their scale is far larger than we know.
  • Each exposed case is merely the tip of the iceberg.
  • The bulk of manipulation remains undetected.

🧬 Fourth Argument: Generative AI Has Made Creating Fake Content Trivially Simple

Before the emergence of GPT-3, GPT-4, Midjourney, and similar systems, creating convincing fake content required significant resources: copywriters, designers, time to create unique texts and images.

Now a single person with API access can generate thousands of unique posts, comments, images, and even videos in an hour that will appear to be created by different people. This has radically changed the economics of bot farms: scaling used to be expensive — now it's nearly free.

Modern language models can imitate the style, tone, and even idiosyncrasies of specific users, making bot detection an increasingly complex task.

If the technology enables creating indistinguishable-from-human content at industrial scale, it's reasonable to assume this is already happening — and in far greater volumes than we realize.

🔁 Fifth Argument: Platforms' Economic Incentives Encourage Artificial Activity

The business model of most social platforms is based on engagement metrics: the more users, views, likes, and comments, the higher the company's valuation and advertising revenue.

This creates a perverse incentive: platforms benefit from inflating activity indicators, even if part of that activity is bot-generated. Removing fake accounts reduces metrics, which negatively impacts stock prices and attractiveness to advertisers.

  1. Recommendation algorithms are optimized to maximize time spent on the platform, not content quality.
  2. If bot-generated content retains user attention, the algorithm will promote it.
  3. This creates a feedback loop: bots generate content → algorithms amplify it → users engage → data trains new bots.
  4. In such a system, the boundary between "living" and "dead" internet genuinely blurs.

🔬Evidence Base: What Research Says About the Real State of the Internet

From arguments to facts. Independent research, academic work, and platform data reveal the true scale of automation. We need to separate documented phenomena from speculation.

📊 Quantitative Data on Bots: From Traffic to Social Networks

Cybersecurity reports document high levels of automated traffic: 30% to 50% of all web traffic is generated by bots. But the structure is critical: a significant portion consists of legitimate bots (Google and Bing search crawlers, monitoring systems, availability checks).

Platform | Official Estimate | Independent Research
Twitter (2022) | < 5% of active users | 9–15%
Facebook (2023) | 5–6% (1.5 billion removed) | Similar scale
Instagram, TikTok | Not disclosed precisely | Comparable to Facebook

The gap between official and independent figures reflects methodological differences and platforms' incentives to minimize the problem.

🧪 Disinformation Campaigns: From Theory to Documented Operations

Social networks have been manipulated by bots to influence public opinion through disinformation for years (S001, S002). Case studies: Russian "troll factory" operations (2016 U.S. elections), anti-vaccine campaigns, attacks on journalists and activists, commercial manipulation.

The mechanism: sophisticated networks of accounts mimic organic behavior—posting neutral content, interacting with each other, building followers, then activating for targeted messaging (S001, S002). Entire armies of accounts can remain "dormant" for years before deployment.

This means disinformation infrastructure is built in advance as a long-term asset, not improvised as needed.
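The "dormant, then activated" pattern described above can be illustrated with a toy metric: compare how often a target topic appears early versus late in an account's timeline. The labels and the score itself are purely illustrative; real analyses use proper change-point detection over much richer features:

```python
def pivot_score(labels, pivot_label="political"):
    """Difference in pivot_label frequency between the second and first half
    of an account's posting history. Values near 1.0 suggest a 'sleeper'
    account repurposed for targeted messaging; near 0.0, a steady account."""
    half = len(labels) // 2
    early, late = labels[:half], labels[half:]
    rate = lambda seq: seq.count(pivot_label) / len(seq) if seq else 0.0
    return rate(late) - rate(early)

# A 'sleeper' timeline: harmless content first, then an abrupt pivot
sleeper = ["cats"] * 50 + ["memes"] * 50 + ["political"] * 100
# A steady account that has always mixed in some political posts
steady = (["political"] * 10 + ["cats"] * 90) * 2
print(pivot_score(sleeper), pivot_score(steady))
```

An abrupt topic pivot on its own proves nothing about a single account; it becomes meaningful when the same pivot happens across many accounts at the same time.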

🧾 Visual Content Verification and Fact-Checking Challenges in the AI Era

The volume of claims requiring fact-checking exceeds what humans can manually process by several orders of magnitude (S006). Visual content is more influential than text and naturally accompanies fake news.

Generative AI has exacerbated the problem: creating convincing fake images, videos, and audio is now accessible to any user. The "shrimp Jesus" phenomenon—absurd AI-generated images with millions of views—demonstrates how easily attention can be manipulated.

Behind the apparent harmlessness may lie a long-term strategy (S001, S002): accounts build audiences with viral content, then pivot to serious manipulation. This connects to the broader problem of conspiratorial thinking and its spread through algorithms.

🌐 Web Evolution and the Centralization Problem: From Web 2.0 to Web 3.0

Web architecture is constantly being reimagined to handle massive data volumes (S011). Web 3.0—a decentralized architecture that's more intelligent and secure—addresses web data ownership through distributed technologies.

Web 3.0 Criticism
Decentralization could worsen bot and disinformation problems: lack of central control makes moderation and removal of harmful content more difficult.
Web 3.0 Defense
Cryptographic identity verification and blockchain transparency will help distinguish real users from bots.

Both positions remain largely theoretical for now. Web 3.0 is not mature and remains contested (S011). The actual outcome depends on how challenges of scalability, energy consumption, and social coordination are resolved.

Figure: multi-layered content analysis pipeline for identifying AI-generated texts and images — from statistical anomalies to semantic patterns and metadata.

🧠Mechanisms and Causal Relationships: Why the Internet Is Becoming More Automated

Understanding what's happening on the internet requires analyzing the mechanisms that lead to the observable phenomena. More details in the Psychology of Belief section.

This isn't a conspiracy — it's a natural consequence of technological progress, market incentives, and tool accessibility.

⚙️ Attention Economy and Engagement Metrics as Automation Drivers

Social platforms operate within an attention economy: revenue depends on user time on platform and engagement metrics (likes, comments, shares). Algorithms are optimized for attention retention, even if achieved through provocative or absurd content.

In such a system, bots become economically viable: they generate activity, boost metrics, create the illusion of popularity. For brands, it's an investment in visibility that pays off through advertising. For platforms, removing bots means lowering metrics. The result is a system that structurally incentivizes artificial activity.

🧬 Technological Progress: From Primitive Scripts to Generative AI

Early 2000s bots were primitive and easily detected. Modern bots based on large language models generate unique, contextually relevant content that's virtually indistinguishable from human output (S001).

Generative AI has lowered the barrier to entry: one person can manage thousands of accounts, each generating unique content and adapting to context. This is a natural consequence of tool accessibility, not a coordinated conspiracy.

🔁 Feedback Loops: How Algorithms Amplify Automated Content

Recommendation algorithms create feedback loops that amplify certain types of content regardless of origin. If a bot-generated post receives high engagement, the algorithm interprets this as a quality signal and shows the post to more users.

Cycle Stage | What Happens | Result
Artificial Activity | Bots create likes, comments, shares | Post appears popular
Algorithmic Amplification | System shows post to larger audience | Organic interactions grow
Model Training | Interaction data used to improve AI | Next generation of bots becomes more convincing
Closed Loop | Bots learn from humans, humans interact with bots | Boundary between "real" and "artificial" blurs

📊 Scale of the Problem: Numbers and Reality

Research shows that bots constitute a significant portion of social media activity. However, this doesn't mean the internet is "dead" — it means it's transforming under the influence of economic incentives and technological capabilities.

The problem isn't the existence of bots, but their integration into an ecosystem where platforms benefit from their presence and users can't distinguish automated content from organic. This creates information asymmetry that undermines trust in the internet as an information source.

🎯 Social Effects: Why People Believe in Dead Internet Theory

The growing presence of bots and automated content creates a sense that the internet is becoming less authentic. People notice repeating patterns, template responses, lack of depth in discussions — and this observation is valid (S004).

However, interpretation of this phenomenon often shifts into conspiracy: instead of seeing a system of economic incentives and technological capabilities, people search for a hidden agent — the state, a corporation, an AI uprising. This is a simpler explanation than understanding complex interactions between algorithms, bots, and human behavior.

  1. Notice the change (content has become less authentic) — valid
  2. Explain it through conspiracy — cognitively easier than analyzing systemic factors
  3. Find "evidence" of conspiracy — confirmation bias in action
  4. Spread the theory — social validation reinforces belief

🔍 Distinguishing Between Fact and Interpretation

Fact: the proportion of automated content on the internet is growing. This is confirmed by research and observable in platform behavior.

Interpretation: this is the result of a conscious plan to capture the internet by AI or the state. This is an assumption that goes beyond available data and requires belief in a coordinated conspiracy.

The mechanisms of internet automation aren't a mystery, but an open system of economic incentives, technological capabilities, and algorithmic feedback loops. Understanding these mechanisms allows critical evaluation of information without resorting to conspiratorial explanations.

Protection from manipulation doesn't begin with searching for hidden enemies, but with understanding how the systems we use every day work. This requires media literacy, critical thinking, and willingness to acknowledge the complexity of reality instead of seeking simple answers.

⚖️Counter-Position Analysis: Critical Counterpoint

The Dead Internet Theory relies on real phenomena, but our critique may underestimate the scale of the problem or have methodological blind spots. Here is where the argument needs reconsideration.

Underestimating the Scale of the Problem

The article may be too cautious in assessing bot prevalence. Some studies suggest that up to 50% of internet traffic is generated by bots, whereas our position that "bots are significant but not dominant" may become outdated within a year or two at the current pace of generative AI development.

Blurring the Boundary Between Human and Bot

We criticize the radical version of the theory but fail to account for the fact that the boundary between "human" and "bot" content itself is blurring. If a person uses ChatGPT to write a comment or an AI assistant writes on behalf of a user, who is actually the author? Our criticism may be based on an outdated binary model.

Information Asymmetry About Closed Platforms

The main sources are public analyses and academic works, but we lack access to internal data from Facebook, Twitter/X, TikTok about the real proportion of bots. Platforms have an interest in underreporting these figures, so our assessment may be too optimistic.

Disproportionate Influence of the Minority

Even if bots constitute a minority, their influence on user perception can be disproportionately large. If 10% of bots create 50% of toxic content, then the subjective feeling of a "dead internet" may be a valid psychological experience that we underestimate by focusing on statistics.

Exponential Obsolescence of Conclusions

Sources are dated prior to the mass adoption of GPT-4, Claude 3, Gemini, and other advanced LLMs. The situation is changing exponentially, and what was true in 2023–2024 may become irrelevant in 2025–2026. The article risks becoming quickly outdated without regular updates.

❓Frequently Asked Questions

What is the Dead Internet Theory?
Dead Internet Theory is a conspiratorial claim that the majority of activity and content on the internet, including social media accounts, is created and automated by artificial intelligence and bots rather than real people. According to this theory, actual users constitute a minority, while the bulk of posts, comments, and interactions are generated by algorithms. The theory originated in underground internet communities and gained traction amid the rise of AI-generated content and documented cases of bot farms (S001, S002).

Is it true that the internet is mostly bots?
Partially true, but not to the extent claimed by the radical version of the theory. There is compelling evidence that social networks are manipulated by bots to spread disinformation — and this has been happening for years (S001, S002). Entire armies of fake accounts are created to influence public opinion. However, the claim that the majority of the internet consists of bots is not supported by data. Reality: bots constitute a significant but not dominant portion of activity, and their influence is concentrated in specific contexts (political campaigns, spam, metric manipulation).

What evidence exists for the theory?
There is no direct evidence for the radical version of the theory. However, there is strong evidence for individual elements: documented cases of bot farms used for disinformation on social networks (S001, S002); the rise of AI-generated content (text, images, video); phenomena like "shrimp Jesus" — mass distribution of absurd AI content to boost engagement; research shows that the volume of claims requiring fact-checking exceeds manual verification capacity by orders of magnitude (S006). These data confirm content quality degradation and manipulation, but do not prove the internet is "dead."

Why do so many people believe the theory?
Because it explains real feelings of online space degradation through a simple, frightening narrative framework. Users notice: homogeneous content, toxic comments, spam, declining discussion quality, algorithmic filtering that creates echo chambers. The theory provides an explanation: "these aren't people, they're bots." Cognitive trap: pattern-matching (the brain seeks patterns) + confirmation bias — every instance of strange behavior is perceived as proof. Moreover, real scandals involving bot farms and disinformation (S001, S002) lend the theory plausibility.

What is the "shrimp Jesus" phenomenon?
Shrimp Jesus is a viral phenomenon of AI-generated images depicting Jesus Christ as a shrimp or surrounded by shrimp, which were massively distributed on social networks. These absurd images were created by bots to boost engagement (likes, comments, shares). While the phenomenon seems harmless and strange, it conceals a long-term strategy: building armies of accounts with high engagement that can later be used to spread disinformation or be sold (S001, S002). This is an example of how AI content is used for manipulation, fueling the Dead Internet Theory.

How dangerous are bots for public opinion?
Very dangerous, and this is confirmed by research. Bots are used to manipulate public opinion in political campaigns, spread fake news, create the illusion of consensus or polarization. Visual content (images, video) is more influential than text and naturally accompanies fake news (S006). The volume of claims requiring verification exceeds manual fact-checking capacity by orders of magnitude (S006). This creates an information environment where falsehoods spread faster than truth and trust in institutions is undermined. Long-term effect: erosion of society's epistemic security.

What is Web 3.0, and can it help with the bot problem?
Web 3.0 is the next generation of internet architecture, decentralized, more intelligent and secure, based on distributed technologies and capable of addressing data ownership issues (S011). The idea: return data control to users, reduce dependence on centralized platforms (which often fail to moderate bots effectively). However, Web 3.0 is not yet a mature technology and remains controversial (S011). Connection to the bot problem: decentralization could complicate mass manipulation, but may also create new attack vectors if effective identity and content verification mechanisms are not developed.

Is there a movement for more humane digital technology?
Yes, and it is gaining momentum. Digital humanism is a philosophical and practical movement calling for humane use of digital technologies, especially AI. It emerged amid growing dissatisfaction with how digitalization is being deployed globally, and is being articulated by former heroes of internet companies, social networks, and search engines (S005). Philosophical and industrial thought leaders are beginning to call for humane use of digital tools (S005). However, the term still lacks a clear conceptual and philosophical foundation and requires clarification (S005). This is an attempt to rethink the relationship between humans and technology in the age of AI.

Can an ordinary user identify a bot?
Yes, but it's becoming increasingly difficult. Classic bot indicators: empty or generic profiles, absence of activity history, mass posting of identical content, inhuman response speeds, use of stock photos or AI-generated avatars. However, modern bots are becoming more sophisticated: they use stolen photos of real people, mimic natural behavior patterns, generate unique content using language models. Reliable identification requires specialized analytical tools (metadata verification, network activity analysis, AI text detection). Average users should remain skeptical of accounts with suspicious characteristics.

What should I do if I encounter suspected bot content?
Don't engage, verify, report. Specific protocol: (1) Don't like, comment, or share — this increases reach. (2) Check the source: who is the author, do they have a history, is the information confirmed by independent sources. (3) Use fact-checking tools (Google Reverse Image Search for images, verification through Snopes, FactCheck.org, etc.). (4) Report suspicious accounts or content through platform mechanisms. (5) Develop media literacy: learn to recognize manipulative techniques, emotional triggers, logical fallacies. (6) Remember: if content provokes a strong emotional reaction (anger, fear, outrage) — that's a red flag requiring additional verification.

So is the Dead Internet Theory true?
It's a useful metaphor, but not literal truth. The radical version of the theory (the internet is completely overrun by bots) isn't supported by data. However, the theory points to real problems: massive bot farms, disinformation campaigns, content quality degradation, algorithmic manipulation. As sources note, it's an interesting lens for examining the internet (S001, S002). The truth is more sinister: not that the internet is dead, but that living space is being systematically poisoned by manipulation, and the boundaries between human and machine content are blurring. The theory is a symptom of a crisis of trust in the digital environment.

How will the web evolve in response to these problems?
Web architecture is constantly being reconsidered and updated to leverage the advantages of massive volumes of data and information (S011). The next generation of web evolution (Web 3.0) is already taking shape and influencing our lives (S011). Web 3.0 is a decentralized architecture, more intelligent and secure, capable of addressing web data ownership issues based on distributed technologies (S011). However, this technology is not yet mature and remains controversial (S011). In parallel, content verification tools, AI detectors, reputation systems, and cryptographic methods for confirming authorship are being developed. The question is: will these solutions keep pace with the growth of the problem?
Deymond Laplasa
Cognitive Security Researcher

Author of the Cognitive Immunology Hub project. Researches mechanisms of disinformation, pseudoscience, and cognitive biases. All materials are based on peer-reviewed sources.
// SOURCES
[01] The ‘dead internet theory’ makes eerie claims about an AI-run web. The truth is more sinister
[02] Digital Humanism
[03] Embedding the Refugee Experience: Forced Migration and Social Networks in Dar es Salaam, Tanzania
[04] Mind the gap: a critique of human/technology analogies in artificial agents discourse
[05] MashUp at the Vancouver Art Gallery: “In Review” [onto]Riffologically
[06] Technology and ontology in electronic music: Mego 1994–present
[07] Restoring Ethical Gumption in the Corporation: A Federalist Paper on Corporate Governance - Restoration of Active Virture in the Corpaorate Sturcture to Curb the YeeHaw Culture in Organizations
[08] From synthespian to convergence character: reframing the digital human in contemporary Hollywood cinema
