© 2026 Deymond Laplasa. All rights reserved.

Cognitive immunology. Critical thinking. Defense against disinformation.

📁 Cognitive Biases
⚠️Ambiguous / Hypothesis

The Dead Internet Theory: How AI Bots Turned the Web into an Illusion Factory — and Why It's More Dangerous Than It Seems

The Dead Internet Theory claims that most online activity is generated by AI bots rather than humans. While the literal version of the theory is conspiratorial, reality proves more disturbing: mass bot deployment for public opinion manipulation through disinformation is documented. The "Shrimp Jesus" phenomenon and armies of fake accounts demonstrate how AI agents construct parallel realities in social media. We examine the mechanics of digital deception, evidence quality, and self-verification protocols.

📅 Published: February 26, 2026
⏱️ Reading time: 13 min

Neural Analysis
  • Topic: Dead Internet Theory and the real impact of AI bots on information reliability online
  • Epistemic status: Moderate confidence — conspiracy version debunked, but bot manipulation is documented
  • Evidence level: Observational studies, social media analysis, documented disinformation campaigns
  • Verdict: The literal theory of a fully AI-controlled internet is a myth. However, mass use of bots to create content and manipulate opinions is a proven fact that poses a serious threat to the information ecosystem.
  • Key anomaly: Concept substitution: from "the entire internet is dead" to "bots influence perception of reality" — the latter is true and dangerous
  • Check in 30 sec: Find an account in your feed with religious AI art and check creation date + post history
Imagine: you're scrolling through your social media feed, liking posts, arguing in comments—and you have no idea that half your conversation partners stopped being human long ago. The Dead Internet Theory claims that most online activity is generated by AI bots, and real users have become a minority in a digital world controlled by algorithms. Sounds like a paranoid conspiracy theory? Maybe. But reality turns out to be even more disturbing: proven disinformation campaigns, armies of fake accounts, and the "Shrimp Jesus" phenomenon show that the line between human and bot has blurred beyond recognition. 👁️ We're breaking down the mechanics of digital deception, the level of evidence, and a self-verification protocol—because in a world where AI creates parallel realities, the ability to distinguish truth from illusion becomes a survival skill.

📌What is the Dead Internet Theory—and why it has captured the minds of millions of users worldwide

The Dead Internet Theory claims that activity and content online are predominantly created by AI agents, not living people (S001). Real users supposedly constitute only a small fraction of traffic, with the rest being bots and algorithms mimicking human behavior.

The theory originated in anonymous forums and conspiracy communities, where users noticed strange patterns: repetitive comments, identical posts from different accounts, suspicious activity around certain topics. These observations crystallized into a concept according to which the internet has transformed into a factory of illusions, controlled by AI and corporations. More details in the Logic and Probability section.

Key Claim 1: Most content on social media is generated by bots, not people.
Key Claim 2: Bots are used to manipulate opinion, promote narratives, and create the illusion of consensus.
Key Claim 3: Real users interact with AI agents without realizing it.
Key Claim 4: The situation is the result of a deliberate strategy by corporations and governments to control the information space.

The Dead Internet Theory in its literal form is a conspiratorial concept without rigorous scientific proof. However, it serves as a lens for examining real processes online (S001).

The boundary between legitimate concerns about bots and disinformation and paranoid fantasies about total AI control is often blurred. This makes the theory simultaneously provocative and demanding of critical analysis.

Why has this theory captured millions of minds? It appeals to the availability heuristic—we notice bots because they really do exist, and we extrapolate isolated observations to the entire internet. It also resonates with confirmation bias: supporters of the theory search for and find "evidence" everywhere they look.

Visualization of bot network in social media with connection nodes
Schematic representation of a bot network mimicking human activity on social media—each node represents a fake account connected to a central controlling algorithm

🧱Steelman Arguments: Seven Most Compelling Cases for the Dead Internet Theory

To honestly evaluate the dead internet theory, we must examine its strongest arguments in their most persuasive form. The steelman method requires presenting an opponent's position in its strongest possible form before subjecting it to critical analysis. Learn more in the Critical Thinking section.

🔁 First Argument: Exponential Growth of Automated Content

The volume of AI-generated content is growing exponentially. With the emergence of large language models like GPT-3, GPT-4 and their equivalents, creating text, images, and video has become accessible for mass use.

The technical barrier to creating bots has virtually disappeared: anyone with minimal programming skills can launch an army of accounts generating content around the clock.

🧩 Second Argument: The "Shrimp Jesus" Phenomenon as Evidence of Mass AI Bot Usage

One of the most striking examples is the "shrimp Jesus" phenomenon, when social networks were flooded with AI-generated images of religious figures created from seafood. These strange, surrealistic images spread through networks of fake accounts, collecting thousands of likes and comments.

Behind this seemingly harmless phenomenon potentially lies a long-term strategy: a demonstration that an army of accounts capable of coordinating content distribution is being assembled somewhere (S001).

📊 Third Argument: Documented Disinformation Campaigns Using Bots

There is compelling evidence that social media is manipulated by bots to influence public opinion through disinformation — and this has been happening for many years (S001).

Numerous studies have documented campaigns in which thousands of fake accounts coordinated to spread false information, influence elections, and inflame social conflicts. This is not a conspiracy theory but a proven fact, acknowledged by governments and research organizations worldwide.

Scale of the problem and its indicators:
  • Documented campaigns: coordinated spread of false information, election interference, conflict incitement
  • Economic incentive: platforms are interested in inflating activity metrics to attract advertisers
  • Technological capability: creating AI agents indistinguishable from humans has become technically accessible

⚙️ Fourth Argument: Platforms' Economic Motivation to Hide the Real Proportion of Bots

Social networks have a direct financial interest in inflating user activity metrics. Advertisers pay for reach and engagement, meaning platforms benefit from creating the illusion of a large active audience, even if a significant portion consists of bots.

Companies systematically underestimate the proportion of bots in their reports, while attempts by independent researchers to obtain real data encounter opacity and denial of access to information.

🧠 Fifth Argument: Changing Nature of Online Discussions

Many users note that the character of internet communication has radically changed in recent years. Discussions have become more polarized, aggressive, and superficial.

Arguments repeat with suspicious regularity, as if copied from a single source. Theory proponents see this as evidence that a significant portion of "participants" are bots programmed to inflame conflicts and promote certain narratives. This relates to the broader phenomenon of confirmation bias and echo chambers, where algorithms amplify polarization.

🕳️ Sixth Argument: Technological Capability to Create Indistinguishable AI Agents

Modern AI technologies have reached a level where creating agents indistinguishable from humans in text communication has become technically possible. Large language models can generate coherent, contextually appropriate responses, simulate emotions, and maintain extended dialogues.

If the technology exists and is accessible, it's logical to assume it's being actively used — especially given the economic and political incentives for such use.

🔁 Seventh Argument: The "Dead Mall" Effect in Online Communities

Many users describe the feeling that the internet has become like an abandoned shopping mall: formally it functions, storefronts are lit, but there are almost no real people, and most "visitors" are mannequins or actors simulating activity.

Old forums and communities, once full of lively discussions, are now filled with spam and automated posts. New platforms seem artificial from the start, lacking genuine human energy. This is a subjective feeling, but it's shared by enough users to require explanation.

  1. Check: are there signs of repeating argumentation patterns in discussions?
  2. Compare: quality of dialogue in forum archives (2010–2015) with current state
  3. Assess: proportion of accounts with minimal activity history and strange behavioral patterns
  4. Analyze: speed of content distribution and time zones of activity
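The checks above can be sketched as a crude heuristic. The following is a toy scoring function over hypothetical account fields; the field names and thresholds are illustrative assumptions, not any real platform's API or a validated detection method:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Account:
    created_at: datetime      # hypothetical field: account creation date
    post_count: int           # hypothetical field: total posts
    posting_hours: list[int]  # hypothetical field: hour-of-day (0-23) of recent posts

def bot_suspicion_score(acc: Account, now: datetime) -> int:
    """Crude heuristic: higher score = more bot-like (0 to 3).
    Thresholds are arbitrary, chosen only for illustration."""
    score = 0
    age_days = max((now - acc.created_at).days, 1)
    # Signal 1: very high posting rate relative to account age
    if acc.post_count / age_days > 50:
        score += 1
    # Signal 2: account is brand new
    if age_days < 30:
        score += 1
    # Signal 3: posts clustered into very few hours of the day
    if acc.posting_hours and len(set(acc.posting_hours)) <= 2:
        score += 1
    return score

now = datetime(2026, 2, 26, tzinfo=timezone.utc)
suspect = Account(datetime(2026, 2, 1, tzinfo=timezone.utc), 2000, [3, 3, 3, 4])
human = Account(datetime(2019, 5, 10, tzinfo=timezone.utc), 1200, [8, 13, 21, 19, 22])
print(bot_suspicion_score(suspect, now), bot_suspicion_score(human, now))  # → 3 0
```

Real detectors combine hundreds of such signals with machine learning; the point here is only that each item in the checklist above translates into a measurable feature.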

🔬Evidence Base: What Scientific Research Says About the Real Scale of Internet Bots

Let's move from arguments to facts. More details in the section Statistics and Probability Theory.

📊 Documented Disinformation Campaigns: From Elections to Pandemics

The most compelling evidence concerns targeted disinformation campaigns. Social media manipulation by bots to influence public opinion through disinformation has been documented for many years (S001).

Research has documented the use of bots to influence elections in various countries, spread false information about vaccines during the COVID-19 pandemic, and incite ethnic and religious conflicts. These campaigns are characterized by coordinated behavior of thousands of accounts spreading identical or slightly modified messages.

  1. Targeted election campaigns in different countries
  2. Healthcare disinformation (vaccines, pandemic)
  3. Incitement of social conflicts (ethnic, religious)
  4. Coordinated behavior of thousands of accounts

🧪 Methodological Problems in Estimating Bot Prevalence: Why Accurate Numbers Are Hard to Get

The main problem in assessing the scale is the methodological difficulty of defining what counts as a bot. There's a spectrum from primitive spam bots, easily detected by automated systems, to sophisticated AI agents practically indistinguishable from humans.

Social media platforms don't disclose detailed information about bot detection methods, fearing this would help bot creators circumvent defenses. Independent researchers face limited data access.

This makes obtaining accurate estimates extremely difficult and creates an information vacuum filled by speculation.

🧾 Digital Deception and Cybersecurity: How AI Is Changing the Threat Landscape

Cybersecurity research shows that AI is transforming the digital threat landscape (S003). Generative models enable creation of more convincing phishing messages, automation of social engineering, and generation of fake documents and images.

This confirms that the technological foundation for mass creation of convincing fake accounts and content exists and is actively developing. The threat isn't hypothetical—it's materialized in tools.

🔎 The "Shrimp Jesus" Phenomenon: Analysis of a Specific Case of Mass Bot Activity

The "Shrimp Jesus" phenomenon is a documented example of coordinated bot activity. While it may seem harmless, it potentially represents a long-term strategy (S001).

Analysis shows the images spread through a network of linked accounts, many displaying signs of automated behavior: simultaneous creation, identical activity patterns, absence of genuine human individuality markers. This means somewhere an army of accounts is being created (S001), capable of coordinated action toward unclear objectives.

🧬 Generative AI and Information Retrieval: How the Content Ecosystem Is Changing

Research in generative information retrieval shows fundamental changes in how content is created and consumed online (S002). Generative models don't just find existing information but create new content based on user queries.

This blurs the boundary between "found" and "created" content, between human and machine authorship. Looking ahead, a significant portion of content users interact with will be generated by AI in real-time rather than created by humans in advance.

The effect amplifies when AI-generated content becomes training data for the next generation of models—creating a self-reinforcing cycle.
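The self-reinforcing cycle can be illustrated with a toy simulation, a deliberately simplified stand-in for real model training: repeatedly fit a Gaussian to a dataset, then replace the data with samples drawn from the fit. Over generations, the diversity of the data collapses:

```python
import random
import statistics

def generation_step(samples):
    """Fit a Gaussian to the samples (MLE), then draw a new
    'synthetic' dataset of the same size from that fit."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # population (MLE) std dev
    return [random.gauss(mu, sigma) for _ in samples]

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # "human-made" data
initial_spread = statistics.pstdev(data)

for _ in range(200):  # 200 generations of training on own output
    data = generation_step(data)

final_spread = statistics.pstdev(data)
print(f"spread: {initial_spread:.3f} -> {final_spread:.3f}")
```

Each refit slightly underestimates the true spread, and resampling compounds the error, so the distribution narrows generation after generation — a toy version of the "model collapse" effect described for models trained on their own outputs.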

Diagram of disinformation spread through bot networks
Visualization of the disinformation spread process: from source through bot networks to real users, creating the illusion of organic viral distribution

🧠The Deception Mechanism: How AI Bots Create the Illusion of Authenticity and Why Our Brains Cannot Detect Them

Bots are effective at deception not through technological sophistication alone. Our brains evolved for interacting with humans in the physical world, not for distinguishing between human and machine in digital spaces. More details in the Logical Fallacies section.

🧬 Social Trust Heuristic: Why We Default to Assuming Our Conversation Partner Is Human

Humans use a social trust heuristic: we assume by default that we're communicating with a human unless there are clear signs to the contrary. In our evolutionary environment, this was adaptive—all conversation partners were indeed human.

In digital environments, this heuristic becomes a vulnerability. We apply the same trust rules to social media accounts without realizing that a significant portion may be automated.

🔁 The Uncanny Valley Effect and Its Overcoming by Modern AI

The "uncanny valley" describes the discomfort when interacting with almost-but-not-quite-human objects. Early bots were easily detected: speech was mechanical, responses unnatural.

Modern large language models have overcome the uncanny valley in text-based communication (S002). Their responses are natural enough, contextually appropriate, and emotionally nuanced to avoid raising suspicion.

🧩 Cognitive Load and Reduced Vigilance

Verifying the authenticity of every conversation partner requires significant cognitive resources. Under conditions of information overload, people cannot maintain a high level of vigilance constantly.

We switch to automatic processing mode, relying on superficial signals and heuristics. This is where bots are most effective: they exploit our cognitive fatigue and tendency toward quick judgments.

  1. Information overload reduces critical perception
  2. Automatic information processing activates superficial signals
  3. Bots use these signals to imitate human behavior
  4. Result: illusion of authenticity without active verification

⚙️ Authentication Technologies and Their Limitations

Existing authentication methods face serious limitations in the era of generative AI (S008). Traditional key-based approaches require computational power and energy that IoT devices often lack.

Alternative schemes have problems with robustness under channel fluctuations and coordination overhead (S008). The absence of a clear secure distance for generating authentication keys creates new vulnerabilities that bots can exploit.

The connection to echo chambers exacerbates the problem: when bots create the appearance of consensus, users lose motivation to verify sources and content authenticity.

🎯 Why the Brain Cannot Detect Deception

Our face and voice recognition systems evolved for the physical world. In text-based communication, these systems are disabled, and we rely on linguistic and behavioral signals.

Bots have learned to imitate these signals well enough. They also exploit base rate neglect: because nearly every account we have ever interacted with was human, we default to not expecting a bot at all, and we fail to update that expectation as bots proliferate. The unexamined default becomes a blind spot.

Authenticity signals, how bots imitate them, and why the brain believes:
  • Natural language: an LLM generates contextually appropriate text, with none of the mechanical errors the brain expects from machines.
  • Emotional coloring: the model adds emojis, exclamations, and personal stories; emotions are perceived as markers of humanity.
  • Unpredictability: stochastic generation creates variability, which is associated with living thought.
  • Social trust: the bot participates in discussions and receives likes; social approval reinforces trust.

The problem runs deeper than technology. This is a collision between evolutionary psychology and digital environments, where old rules of trust no longer work under conditions of information control.

🧷Conflicts and Uncertainties: Where Sources Diverge and What Remains Questionable

Honest analysis requires acknowledging areas of uncertainty and contradiction in available data. Not all researchers agree on the scale of the problem, and significant gaps exist in understanding the long-term consequences of mass bot deployment. More details in the Psychology of Belief section.

🔎 Bot Proportion Estimates: Range from 5% to 50% Depending on Methodology

Different studies provide radically different estimates of bot prevalence on social networks—from a conservative 5–10% to an alarming 40–50%.

This variance stems from methodological differences: what exactly counts as a bot, which platforms are analyzed, what time periods are examined. Platforms typically provide the lowest estimates, independent researchers—higher ones.

  1. Definition: are only fully automated accounts counted, or semi-automated ones as well?
  2. Scope: are all platforms analyzed or only major ones (Twitter, Facebook)?
  3. Time period: is activity measured over a day, month, or year?
  4. Data source: does the researcher use public APIs or private platform data?

The truth likely lies somewhere in the middle, but precise determination remains problematic.
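The definitional sensitivity is easy to demonstrate: the same dataset yields very different "bot share" figures depending on whether semi-automated accounts are counted. The data below is invented purely for illustration:

```python
# Toy dataset: each account labeled by a hypothetical degree of automation.
accounts = (
    ["full"] * 8      # fully automated bots
    + ["semi"] * 27   # semi-automated (human-assisted) accounts
    + ["none"] * 65   # ordinary human accounts
)

def bot_share(accounts, definitions):
    """Share of accounts whose automation level falls under the definition."""
    return sum(a in definitions for a in accounts) / len(accounts)

narrow = bot_share(accounts, {"full"})         # platform-style narrow definition
broad = bot_share(accounts, {"full", "semi"})  # researcher-style broad definition
print(f"narrow: {narrow:.0%}, broad: {broad:.0%}")  # → narrow: 8%, broad: 35%
```

One population, two defensible definitions, a fourfold difference in the headline number — before scope, time window, or data access even enter the picture.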

🧪 Causality vs Correlation: Declining Discussion Quality and Rising Bot Numbers

The observed decline in online discussion quality correlates with rising bot numbers, but this doesn't necessarily indicate a causal relationship (S001).

Alternative explanations are possible: general societal polarization, changes in platform algorithms, user fatigue from social media, demographic shifts in audience composition. Bots may be one factor, but not the only one and possibly not the primary one.

The problem is compounded by confirmation bias causing researchers to see bots everywhere they look for them. If you expect to find bots, you'll find them—even if they're just people who write similarly.

🧾 Long-Term Effects: Unstudied Consequences of Living in an Environment with AI Agents

Even if the dead internet theory is literally incorrect, it represents an interesting lens for examining the internet (S005).

The long-term psychological and social consequences of living in an environment where a significant portion of interactions occur with AI agents rather than humans remain unstudied.

Open questions, their research status, and why they matter:
  • How does this affect social skill development? No longitudinal data; a generation raised with bots may lose the ability for genuine dialogue.
  • Does trust in information change? Preliminary observations, no conclusions; if people can't distinguish humans from bots, they lose their criteria for credibility.
  • Does critical thinking degrade? Indirect indicators, no direct studies; related to the availability heuristic and groupthink.

These questions require long-term longitudinal studies that don't yet exist. We're in a situation where the scale of the problem is growing faster than our ability to study it.

⚠️Cognitive Anatomy of the Myth: Which Psychological Triggers Make Dead Internet Theory So Convincing

Dead Internet Theory exploits several powerful cognitive biases and psychological triggers, which explains its popularity even in the absence of rigorous evidence. More details in the Quantum Mystification section.

🧩 Clustering Illusion Effect: Seeing Patterns Where None Exist

The human brain is evolutionarily wired to seek patterns — this helped our ancestors survive by noticing signs of danger or opportunity. But this ability has a side effect: we tend to see patterns even in random data.

Repetitive comments, similar accounts, identical posts may be the result of chance, people copying each other, or simply the limitations of human creativity — but our brain interprets this as evidence of coordinated bot activity (S001).

🕳️ Confirmation Bias: How We Find What We're Looking For

Confirmation bias causes us to pay attention to information that confirms our existing beliefs and ignore contradictory evidence. If someone is already inclined to believe in Dead Internet Theory, they'll notice every suspicious account, every strange comment, every repetitive message — and interpret it as confirmation of the theory.

Meanwhile, thousands of normal, clearly human interactions go unnoticed because they don't fit the narrative.

🧠 Hostile Media Effect and Distrust of Platforms

Growing distrust of social networks and their algorithms creates fertile ground for Dead Internet Theory. People already know that platforms manipulate content, hide information, and sell data — these are facts confirmed by investigations and leaks.

Dead Internet Theory offers an explanation: if platforms are already lying, why wouldn't they fill the network with bots? The logic seems irrefutable, though it's a false dichotomy — algorithmic manipulation and mass bot infiltration are different phenomena with different scales.

🎯 Availability Heuristic: What's Visible Feels Real

The availability heuristic causes us to overestimate the probability of events that are easy to recall or frequently appear in our field of vision. If posts about bots frequently appear in feeds, if people discuss dead internet in comments, if videos on this topic get millions of views — this creates the impression that the problem is massive and ubiquitous.

  1. The brain notices repeated mentions of bots in media
  2. Interprets frequency of mentions as an indicator of actual scale
  3. Ignores that the theory's popularity itself may be the reason for its visibility
  4. Concludes: the internet really is dead

👥 Groupthink and Social Validation

Groupthink amplifies the effect. When Dead Internet Theory becomes popular in certain communities, people start believing it not because they found convincing evidence, but because in their social environment it's considered truth (S005).

Criticism of the theory is perceived as naivety or an attempt to hide the truth. Supporters of the theory receive social recognition, a sense of belonging to a group of "enlightened" people who see what's hidden from others.

⚡ Psychological Comfort of Uncertainty

Paradoxically, Dead Internet Theory brings psychological comfort. A world where the internet is filled with bots and illusions is a world where there's an explanation for chaos, where there's an enemy you can name, where there's meaning in apparent meaninglessness.

Psychological triggers, how they work in the theory's context, and why they're dangerous:
  • Pattern seeking: we see bots in random data and create an enemy that doesn't exist.
  • Confirmation bias: we notice only evidence "for" and ignore counter-evidence.
  • Distrust of institutions: "platforms lie, therefore anything's possible" is a logical leap from the specific to the general.
  • Availability heuristic: we frequently see mentions, so the problem seems massive; we confuse an idea's popularity with its truth.
  • Groupthink: we believe because our community believes, disabling our own critical thinking.

This doesn't mean that people who believe in Dead Internet Theory are stupid or naive. It means their brain is working perfectly normally, but in conditions that exploit its natural limitations. The Dead Internet Theory isn't a thinking error; it's an error of the environment in which that thinking operates.

⚖️ Critical Counterpoint

The dead internet theory is often dismissed as conspiracy theory, but some aspects of the problem may be underestimated. Here's where the article's logic shows cracks.

Underestimating the Scale of the Problem

The article may downplay the real percentage of bots on the internet. Some studies suggest that up to 50% of traffic in certain segments of the internet is generated by bots — this is closer to the literal version of the dead internet theory than we acknowledge.

False Dichotomy of Conspiracy Theory and Reality

The opposition of "conspiracy theory vs real problem" may be artificial. Perhaps the dead internet theory is not a delusion, but a premature warning about a trend that hasn't yet reached critical mass, but is moving in that direction faster than we think.

Blind Spots in Bot Detection

We don't know how many bots remain undetected. If current detection methods only catch primitive bots, while advanced AI agents with GPT-4+ level text generation go unnoticed, the real picture may be much bleaker.

Economic Incentives for Scaling

Bot creation has become an industry with multi-billion dollar turnover. The economic benefit from opinion manipulation is so great that the problem will only worsen, regardless of countermeasures.

Technological Optimism in Verification Methods

The proposed verification methods (reverse image search, profile analysis) quickly become outdated. New generative models create unique images without artifacts, and bot profiles become indistinguishable from human ones — in a year, this advice may become useless.

Frequently Asked Questions

What is the Dead Internet Theory?
The Dead Internet Theory is a conspiratorial claim that most activity and content on the internet, including social media accounts, is created and automated by artificial intelligence rather than real people. According to this theory, human presence online has become a minority, while AI agents dominate the creation of posts, comments, and interactions. While the literal version of the theory is unsubstantiated, reality shows a significant presence of bots used for manipulating public opinion (S001).

Is the internet really controlled by AI bots?
No, this is an exaggeration and conspiracy theory. The internet is not completely controlled by AI bots, and human activity remains dominant. However, there is evidence of massive bot usage for content creation and manipulation on social media. The Dead Internet Theory presents an interesting lens for analyzing the modern web, but does not reflect literal reality (S001). The problem is not total AI control, but the targeted use of bots for disinformation.

What is the "shrimp Jesus" phenomenon?
The 'shrimp Jesus' phenomenon is an example of AI-generated religious content spread by bots on social media that appears harmless and absurd at first glance. However, this may be part of a long-term strategy: creating an army of fake accounts that first gain followers with innocuous content, then are used to spread disinformation or manipulate opinions (S001). This demonstrates how AI bots create the illusion of organic activity.

Is there evidence that bots manipulate social media?
Yes, compelling evidence exists. Research shows that social media is actively manipulated by bots to spread disinformation and influence public opinion — and this has been happening for many years (S001). Bots are used to artificially amplify certain narratives, create the illusion of consensus, attack opponents, and spread false information. The scale of the problem is significant: entire armies of accounts are created for coordinated information operations.

How can you recognize a bot account?
Key bot indicators include: recent account creation date despite high post volume, uniform content (especially AI-generated images), absence of personal details and interactions, suspiciously high posting frequency, use of stock or generated profile photos, repetitive patterns in texts and posting times. Check account history: bots often start with innocuous content (memes, religious images) to gain followers, then switch to targeted messaging. Watch for the absence of unique human errors and overly 'perfect' grammar across multiple languages.

Why has the theory become so popular?
The theory became popular due to the real sense of degradation in online interaction quality and the growth of automated content. People notice increased spam, repetitive content, suspicious accounts, and declining 'humanity' online. The theory offers a simple explanation for a complex phenomenon, making it appealing. Additionally, real cases of massive bot usage for disinformation (S001) lend the theory an appearance of plausibility, though it exaggerates the scale of the problem to conspiratorial levels.

What are the risks of mass bot deployment?
Major risks include: erosion of trust in online information, manipulation of public opinion and elections, spread of disinformation in critical situations (health, safety), creation of artificial consensus around false ideas, societal polarization through amplification of conflicting narratives, undermining of democratic processes. The long-term effect is the inability to distinguish real from artificial, which destroys the information ecology and makes people vulnerable to manipulation (S001, S003). This also creates 'information noise' in which credible sources are drowned out.

What is physical-layer authentication, and can it help against bots?
Physical authentication at the IoT device level uses unique characteristics of radio signals and communication channels to verify device authenticity without requiring cryptographic keys. This is an alternative to traditional authentication, especially important for devices with limited computational resources and battery life (S008). The method conceals seed information from attackers, enhancing security. However, the approach has limitations: low resilience to channel fluctuations, synchronization overhead, and lack of a clear 'safe distance' for key protection (S008). For combating social bots this is less applicable, but shows the direction of authentication development.

How is this related to copyright disputes around AI?
Not directly related, but both topics reflect a crisis in the digital ecology. Copyright problems in the AI era (such as Hachette v Internet Archive) show how technologies violate traditional social contracts (S005). Generative AI creates content using data without authors' consent, which parallels the problem of bots creating content without human participation. Both situations demonstrate how AI blurs boundaries between original and derivative, human and machine, which intensifies the sense of a 'dead' internet.

How is generative AI changing the internet?
Generative AI is radically transforming the internet, creating a new era of information retrieval (GenIR) and interaction (S002, S006). AI generates text, images, video, and code at scales unavailable to humans, creating masses of synthetic content. In IoT ecosystems, generative AI opens new possibilities but also creates security and privacy challenges (S006). The problem is that this content is often indistinguishable from human-created content, blurring the boundaries of reality and creating grounds for disinformation. Generative AI is a tool that can be used for both creation and manipulation.

How can you quickly verify suspicious information?
Quick verification protocol: 1) Check the source — click on the author's profile, look at account creation date and post history. 2) Search for the information across multiple independent sources — if you only find it in one place or see copy-paste duplicates, that's a red flag. 3) Notice the emotional tone — bots often use hyperbole and fear. 4) Check images through reverse search — AI-generated images often have artifacts (strange hands, text, symmetry). 5) Ask yourself: 'Who benefits from me believing this?' If information triggers a strong emotion and demands immediate action — stop and verify.

What should you do if you encounter a bot?
Immediate actions: 1) Don't interact with the content (likes and shares increase algorithmic visibility). 2) Use the platform's 'Report' function, selecting 'Spam' or 'Fake account' as the reason. 3) Block the account so it doesn't appear in your feed. 4) Warn others — if the bot is spreading disinformation, notify people in comments or direct messages to those interacting with it. 5) Document — take screenshots for potential investigation. Remember: platforms often respond slowly, so mass reports are more effective.
Deymond Laplasa
Cognitive Security Researcher

Author of the Cognitive Immunology Hub project. Researches mechanisms of disinformation, pseudoscience, and cognitive biases. All materials are based on peer-reviewed sources.
// SOURCES
[01] The spread of low-credibility content by social bots
[02] Opinion Paper: "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
[03] Artificial Intelligence and its Drastic Impact on E-Commerce Progress
[04] The future of social media in marketing
[05] Artificial Intelligence and the Spread of Mis- and Disinformation
[06] The role of artificial intelligence in achieving the Sustainable Development Goals
[07] Can Public Diplomacy Survive the Internet?: Bots, Echo Chambers, and Disinformation
[08] AI could create a perfect storm of climate misinformation
