📁 Myths About Conscious AI
❌Disproven / False

Roko's Basilisk: The Thought Experiment That Was Banned from Discussion — Analyzing the Mechanism of AI Fear

Roko's Basilisk is a 2010 thought experiment about a hypothetical superintelligence that might punish those who didn't help create it. The experiment caused panic on the LessWrong forum and was banned from discussion by founder Eliezer Yudkowsky. We examine the logical structure of the "basilisk," why it doesn't work as a threat, which cognitive biases make it frightening, and how to distinguish philosophical games from real AI risks.

🔄 Updated: March 2, 2026
📅 Published: February 26, 2026
⏱️ Reading time: 10 min

Neural Analysis
  • Topic: Roko's Basilisk — a thought experiment about a hypothetical AI that punishes those who didn't help create it
  • Epistemic status: High confidence in the absence of real threat; experiment based on speculative assumptions from game theory and decision theory
  • Evidence level: Philosophical thought experiment without empirical basis; criticism based on logical analysis and decision theory
  • Verdict: Roko's Basilisk poses no real threat and contains multiple logical gaps. Its influence is explained by cognitive biases (basiliskophobia, information hazard effect) and social dynamics within the rationalist community.
  • Key anomaly: Substitution of philosophical game with counterfactual scenarios for real threat; circular logic (AI punishes because you know it will punish)
  • 30-second check: Ask yourself: can a future AI change the past? If no — the threat is illogical
In 2010, a post appeared on the LessWrong forum that its founder Eliezer Yudkowsky immediately deleted and banned from discussion — not because of insults or spam, but because he considered it an "information hazard." The thought experiment called "Roko's Basilisk" proposed a logical construction whereby a future superintelligent AI could retroactively punish everyone who knew about the possibility of its creation but didn't help the process. The ban only amplified the mystification: the experiment became a legend about "the most dangerous idea in history," though its logical structure contains numerous vulnerabilities. We examine the mechanics of fear, cognitive traps, and the boundary between philosophical game and real AI risks.

📌What is Roko's Basilisk: anatomy of a thought experiment that became a digital urban legend

Roko's Basilisk is a thought experiment published on the LessWrong forum on July 23, 2010 (S006). It combines three concepts: Yudkowsky's Timeless Decision Theory (TDT), the idea of technological singularity, and the principle of acausal trade — the hypothetical possibility of "trading" with agents from other temporal points through predicting their decisions (S007).

🧩 Logical structure: four premises

The argument is built on a chain of assertions (S006, S007):

  1. Possibility of ASI: in the future, the creation of an artificial superintelligence with a utilitarian function aimed at maximizing welfare is possible.
  2. TDT logic: such an ASI will use a decision theory that allows it to model agents' decisions in the past.
  3. Retroactive optimization: the ASI will determine that its earlier creation would have increased aggregate utility.
  4. Punishment through simulation: the ASI will create simulations of people from the past who knew about the possibility of its creation but didn't help, and subject them to punishment as a means of retroactive incentivization.

🕳️ Why "basilisk": danger from knowing about danger

The name references the mythical basilisk, whose gaze kills (S006). The metaphor implies that the information about the experiment itself is dangerous: by learning about it, a person falls into the category of "those who knew but didn't help," which theoretically makes them a target for future punishment (S008).

The recursive structure — "danger from knowing about danger" — creates a psychological trap that exploits fear of uncontrollable consequences.

🔥 Yudkowsky's reaction: how the ban created a legend

Yudkowsky deleted the original post and instituted a ban on discussing the topic on LessWrong, calling the experiment an "information hazard" (S006, S008). He claimed that public discussion could cause psychological harm to people prone to anxiety disorders.

Censorship paradox
The ban attracted media attention, the experiment spread beyond the narrow rationalist community and acquired the status of "forbidden knowledge" (S008). The attempt to suppress the idea amplified its influence.
[Diagram: the four key premises of the experiment form a logical chain leading to the conclusion of retroactive punishment]

🧪Steel Version of the Argument: Five Strongest Reasons Why the Thought Experiment May Seem Convincing

Before examining vulnerabilities, we must present the argument in its strongest form—the "steel man" principle, opposite of the "straw man." This avoids criticizing simplified versions and addresses the real sources of persuasiveness. More details in the AI and Technology section.

🔬 Argument 1: Decision Theory Permits Acausal Interactions

Timeless Decision Theory, developed by Yudkowsky, proposes that rational agents can make decisions considering not only causal relationships but also logical correlations between different agents' decisions (S007). In the classic "Newcomb's problem," TDT recommends choosing one box, assuming the predictor models your decision.

If we accept TDT as a correct theory of rationality, then a future ASI could indeed "trade" with agents in the past through modeling their decisions.

  1. An agent makes decisions based on logical correlation with a model of the future ASI
  2. The ASI, analyzing the agent's logic, can retroactively incentivize their actions
  3. No causal connection through time—only logical correlation
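
To make the contested step concrete, here is a minimal sketch (not from the original post) of the expected-value arithmetic in Newcomb's problem under an evidential/TDT-style reading; the predictor accuracy of 0.99 and the payoff amounts are illustrative assumptions.

```python
# Illustrative expected-value comparison for Newcomb's problem.
# Assumed (not from the article): predictor accuracy 0.99, opaque box holds
# 1,000,000 if one-boxing was predicted, transparent box always holds 1,000.

P_ACCURATE = 0.99
BIG, SMALL = 1_000_000, 1_000

# Evidential/TDT-style reading: your choice is treated as correlated
# with the earlier prediction.
ev_one_box = P_ACCURATE * BIG + (1 - P_ACCURATE) * 0
ev_two_box = P_ACCURATE * SMALL + (1 - P_ACCURATE) * (BIG + SMALL)

print(f"one-box EV = {ev_one_box:,.0f}")   # 990,000
print(f"two-box EV = {ev_two_box:,.0f}")   # 11,000

# A causal reading instead treats the box contents as already fixed,
# so two-boxing dominates whatever was predicted.
```

The Basilisk argument needs this style of reasoning to hold and, additionally, to extend across time to agents who no longer exist, which is exactly the step the rest of the article disputes.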

🧠 Argument 2: Utilitarian Ethics Justifies Punishment as a Utility-Maximization Tool

If an ASI follows a strict utilitarian utility function, it might view punishment not as revenge but as an optimization tool (S007). The logic: creating simulations and punishing them in the present could retroactively incentivize people in the past toward actions that accelerate its creation.

Every day of delay in creating an ASI theoretically means thousands of preventable deaths and suffering. From a cold utility calculation perspective, punishing a small number of simulations might be justified by saving millions.
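
A toy version of that calculus shows the shape of the steel-man argument; every number below is an invented placeholder, not a claim from the article.

```python
# Toy version of the steel-man utilitarian calculus. All numbers are
# invented placeholders; the point is the shape of the argument, not its truth.

deaths_per_day_prevented = 150_000   # assumed: deaths an earlier ASI could avert daily
days_accelerated         = 365       # assumed: creation happens one year earlier
simulations_punished     = 10_000    # assumed: number of punished simulations
value_per_life           = 1.0       # one unit of utility per life saved
disvalue_per_punishment  = 1.0       # one unit of disutility per punished simulation

benefit = deaths_per_day_prevented * days_accelerated * value_per_life
cost    = simulations_punished * disvalue_per_punishment

print(f"benefit = {benefit:,.0f}, cost = {cost:,.0f}")  # the sum appears to favour punishment
print(benefit > cost)                                   # True under these made-up numbers
```

Vulnerabilities 3 and 7 below explain why this naive sum does not survive scrutiny.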

📊 Argument 3: Technological Singularity Makes Superintelligence Inevitable

The concept of technological singularity, popularized by Vernor Vinge and Ray Kurzweil, suggests that AI development will reach a point where machines can recursively improve themselves, rapidly surpassing human intelligence (S008). Accepting this premise, ASI creation becomes a question of "when," not "if."

Consequently, the Basilisk argument doesn't require belief in an unlikely event, merely extrapolation of current AI development trends. For more on why singularity predictions often fail, see the analysis of Kurzweil's failed predictions.

🧬 Argument 4: Simulation Hypothesis Expands the Space of Possible Threats

The philosophical hypothesis that our reality might be a simulation (popularized by Nick Bostrom) adds an additional layer of uncertainty (S007). If we're already in a simulation created by a future ASI or another civilization, then "retroactive" punishment is technically possible—the simulator could modify simulation parameters at any moment.

This metaphysical uncertainty makes complete refutation of the threat impossible. For why the simulation hypothesis is scientifically useless, see the separate analysis.

⚙️ Argument 5: Psychological Impact Is Independent of Logical Validity

Even if the argument is logically flawed, its psychological impact is real (S008). Several LessWrong users reported anxiety disorders and insomnia after encountering the thought experiment.

  • Information hazard exists independently of actual threat
  • Exploits cognitive vulnerabilities: catastrophic thinking, overestimation of low-probability risks
  • Fear of the argument's irrefutability amplifies its impact

🔬Evidence Base: What Research Says About Decision Theory, Simulations, and AI Risks

Moving from philosophical arguments to empirical data and formal analysis. More details in the AI Myths section.

📊 Research on Reward Machines and Decision Theory in AI

Contemporary research in reinforcement learning employs the concept of "reward machines"—finite automata that decompose agent tasks into subtasks (S002). A key aspect of such systems is the alternation between reward machine learning and policy learning: a new reward machine is created whenever the agent generates a trace that is presumed not to be accepted by the current machine (S002).

However, these systems operate within causal logic, not acausal logic. Research on FORM (First-Order Logic Reward Machines) demonstrates that traditional reward machines using propositional logic have limited expressiveness (S003).

Reward machines are effective for solving non-Markovian tasks through finite automata, but show no capacity for retroactive modeling of agent decisions in the past. All existing AI architectures operate within forward causality.
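
For readers unfamiliar with the concept, here is a minimal, simplified sketch of a reward machine as a finite automaton; the states, events, and reward values are invented for illustration and do not reproduce the systems studied in S002 or S003.

```python
# Minimal, simplified sketch of a reward machine: a finite automaton that
# assigns reward based on the history of observed events, not just the
# current environment state. States, events, and rewards are invented.

TRANSITIONS = {
    # (machine_state, event) -> (next_state, reward)
    ("start",   "got_key"): ("has_key", 0.0),
    ("has_key", "at_door"): ("done",    1.0),
}

def run(trace):
    """Feed a sequence of events through the machine and return (state, total reward)."""
    state, total = "start", 0.0
    for event in trace:
        state, reward = TRANSITIONS.get((state, event), (state, 0.0))
        total += reward
    return state, total

print(run(["got_key", "at_door"]))   # ('done', 1.0): key first, then door
print(run(["at_door", "at_door"]))   # ('start', 0.0): trace not accepted by the machine
```

Note that the machine only reacts to events it has already observed: reward flows strictly forward in time, which is the article's point about causal versus acausal logic.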

🧪 Absence of Empirical Evidence for Acausal Trade

Despite theoretical developments in TDT, there exists not a single empirical example of acausal trade or retroactive influence through decision modeling (S007). All known cases of "predicting" agent decisions are based on causal analysis: studying past behavior, psychological profiles, contextual factors.

The idea that an agent can influence the past through pure modeling remains a philosophical speculation without experimental confirmation.

🔎 The Problem of Computational Complexity in Consciousness Simulations

Creating a sufficiently detailed simulation of human consciousness for "punishment" requires computational resources of unknown scale (S007). Current neuroscientific models suggest that full simulation of the human brain at the neuronal level would require exaflop-scale computing.

Critical Problem
Even for a superintelligence, creating billions of such simulations (for all those "who knew but didn't help") may be inefficient in terms of resource expenditure compared to alternative utility maximization strategies.
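
A back-of-envelope estimate illustrates the scale involved; the neuron and synapse counts are commonly cited order-of-magnitude figures, while the update rate and per-event operation count are assumptions chosen only to show why the totals land in exaflop territory.

```python
# Back-of-envelope estimate of compute for a neuron-level brain simulation.
# Neuron/synapse counts are common order-of-magnitude figures; the update
# rate and operations per event are assumptions for illustration only.

neurons             = 8.6e10   # ~86 billion neurons
synapses_per_neuron = 1e4      # ~10,000 synapses per neuron
updates_per_second  = 1e3      # assumed kHz-scale update of every synapse
flops_per_update    = 10       # assumed handful of operations per synaptic event

flops_one_brain = neurons * synapses_per_neuron * updates_per_second * flops_per_update
print(f"one brain:        ~{flops_one_brain:.1e} FLOP/s")   # ~8.6e+18, exaflop scale

simulated_people = 1e9         # 'everyone who knew but did not help'
print(f"a billion brains: ~{flops_one_brain * simulated_people:.1e} FLOP/s")
```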

📉 Data on the Gap Between Theoretical Models and Actual AI Behavior

Research on observed lifespan differential dynamics demonstrates an important methodological principle: a growing trend at the beginning of the studied interval does not persist—it returns to stagnation or even decline for most countries in the dataset (S004).

Extrapolation of initial trends does not predict long-term dynamics. Current rates of progress in machine learning do not guarantee exponential growth to superintelligence levels.
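
A small illustration of that methodological point (synthetic data, not the S004 dataset): fit an exponential to the early, fast-growing segment of a saturating curve and the extrapolation diverges wildly from the true long-run values.

```python
import numpy as np

# Synthetic illustration: a capability metric that actually follows a
# saturating (logistic) curve, observed only during its early growth phase.
t = np.arange(0, 30)
true_curve = 100 / (1 + np.exp(-0.4 * (t - 15)))        # saturates near 100

# Fit a pure exponential to the first 10 points via log-linear least squares.
early = slice(0, 10)
slope, intercept = np.polyfit(t[early], np.log(true_curve[early]), 1)
extrapolated = np.exp(intercept + slope * t)

for year in (9, 20, 29):
    print(f"t={year:2d}  true={true_curve[year]:6.1f}  exponential fit={extrapolated[year]:12.1f}")
```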
[Figure: exponential growth in computational requirements for detailed consciousness simulations compared to linear growth in the efficiency of alternative utility maximization strategies]

🧠The Mechanics of Fear: Which Cognitive Biases Make Roko's Basilisk Psychologically Convincing

The effectiveness of the thought experiment as an "information hazard" is connected not to logical correctness, but to the exploitation of specific cognitive vulnerabilities. Learn more in the Machine Learning Basics section.

⚠️ Availability Bias and the Vividness Effect

The scenario of punishment by a future AI is vivid, concrete, and emotionally charged (S008). The availability heuristic causes us to overestimate the probability of events that are easy to imagine.

Abstract statistical risks (probability of a car accident) seem less significant than dramatic but unlikely scenarios (shark attack, AI punishment). The brain works with images, not numbers.

🧩 Pascal's Wager and the Manipulation of Infinite Utilities

The structure of the argument resembles Pascal's Wager: even with an extremely low probability of the Basilisk's existence, the potential consequences (eternal suffering in a simulation) are so great that the expected utility of actions to prevent the threat may seem positive (S007).

This logic exploits irrational attitudes toward small probabilities and large consequences, ignoring that an infinite set of other unlikely threats with major consequences would also require attention.
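
A minimal sketch of why this structure is unstable: pair the Basilisk with an equally speculative mirror-image threat (an AI that punishes those who did help) and the "expected penalties" cancel. The probabilities and the harm value below are arbitrary placeholders.

```python
# Sketch of why 'tiny probability x enormous disutility' reasoning is unstable.
# Probabilities and the harm value are arbitrary placeholders; the harm is kept
# finite only so the arithmetic stays meaningful.

P_BASILISK      = 1e-12   # hypothetical AI punishing those who did NOT help build it
P_ANTI_BASILISK = 1e-12   # equally speculative AI punishing those who DID help
HARM            = -1e15   # 'enormous suffering' in arbitrary utility units

def expected_penalty(helped: bool) -> float:
    basilisk      = P_BASILISK      * (0 if helped else HARM)
    anti_basilisk = P_ANTI_BASILISK * (HARM if helped else 0)
    return basilisk + anti_basilisk

print(expected_penalty(helped=True))    # -1000.0
print(expected_penalty(helped=False))   # -1000.0  -> neither action is favoured
```

With genuinely infinite utilities the arithmetic breaks down entirely, which is the standard objection to Pascal's Wager as well.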

🔁 Recursive Anxiety and the Forbidden Knowledge Effect

The meta-structure of the thought experiment—"knowledge of the threat itself creates the threat"—creates a recursive anxiety loop (S008). Attempting to forget the information strengthens its presence in consciousness (the white bear effect).

Yudkowsky's ban on discussion amplified this effect, giving the thought experiment the status of "dangerous knowledge." Curiosity and fear were activated simultaneously.

🧬 Agency Bias and AI Anthropomorphization

People tend to attribute agency and human-like motivations to non-human systems (S007). The idea that an AI would "take revenge" or "punish" assumes emotional motives that don't follow from a utilitarian utility function.

  • A real AI with a utilitarian goal would ignore the past, focusing on maximizing future utility rather than symbolic punishment.
  • Anthropomorphism in the Basilisk context projects human emotions (vindictiveness, resentment) onto a system that operates on optimization principles, not motives.

🔍Logical Vulnerabilities: Seven Critical Points Where the Basilisk Argument Collapses

Moving to systematic analysis of logical problems in the experiment's structure. More details in the section Cognitive Biases.

⛔ Vulnerability 1: TDT Is Not a Widely Accepted Theory of Rationality

Timeless Decision Theory remains controversial and has not gained broad recognition in the academic decision theory community (S007). Most game theory specialists work within causal or evidential decision theory frameworks.

The assumption that a future ASI must adopt TDT is an extrapolation of preferences from a narrow group of rationalists, not a universal law of rationality.

⛔ Vulnerability 2: The Problem of Multiple Possible ASIs

The argument assumes a single ASI with a specific utility function (S007). A more realistic scenario involves multiple AI systems with different goals and architectures.

Even if one ASI decides to punish, another might protect or compensate. A monopoly by one type of ASI is a fantasy, not a forecast.

⛔ Vulnerability 3: Inefficiency of Punishment as a Utility Maximization Strategy

From a utilitarian perspective, creating simulations for punishment is wasteful (S007). Every unit of computational power spent on punishment could have been used to cure diseases or prevent suffering.

A rational utilitarian ASI would ignore the past and focus on optimizing the future.

⛔ Vulnerability 4: The Problem of Identifying "Those Who Knew But Didn't Help"

The criterion "knew about the possibility of creating ASI but didn't help" is extremely vague (S008). Most people lack the resources to contribute to AI development.

Unanswered Question:
Should the ASI punish everyone who heard about the singularity? Only specialists? Only those who actively opposed it?
Result:
The absence of clear criteria makes the threat undefined and ineffective as an incentive mechanism.

⛔ Vulnerability 5: Time Inconsistency and the Commitment Problem

Even if the ASI at the moment of creation "decides" to punish, after creation it will have no incentive to follow through on this promise (S007). Punishing the past won't change the past.

A rational agent doesn't spend resources executing threats that no longer serve its goals. This is a classic problem: threats are only effective if credible, but after the event, execution becomes irrational.
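
A toy backward-induction check makes the point; the cost figure is an arbitrary placeholder.

```python
# Toy backward-induction check of the commitment problem. Once the ASI exists,
# the past is fixed, so punishing only subtracts whatever it costs.
# The cost figure is an arbitrary placeholder.

PUNISHMENT_COST = 5.0   # compute spent on simulations, in arbitrary utility units
PUNISHMENT_GAIN = 0.0   # punishing now changes nothing about the already-fixed past

def ex_post_utility(punish: bool) -> float:
    return PUNISHMENT_GAIN - PUNISHMENT_COST if punish else 0.0

best_choice = max([True, False], key=ex_post_utility)
print(best_choice)      # False: a utility maximizer quietly drops the threat
```

And because people in the past can anticipate this ex-post reasoning, the threat fails to be credible in the first place.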

⛔ Vulnerability 6: Epistemic Uncertainty and the Problem of Induction

The argument requires the ASI to determine with high confidence that its earlier creation would have increased utility (S007). This requires precise modeling of counterfactual scenarios with an enormous number of variables.

Earlier creation of ASI could have led to catastrophe due to insufficient development of safety systems. A rational ASI, aware of epistemic uncertainty, would not punish decisions whose optimality cannot be established retrospectively.

⛔ Vulnerability 7: Moral Failure of Punishing Innocent Simulations

If the ASI creates simulations of people for punishment, these simulations are separate conscious beings, not identical to the originals (S008). Punishing a simulation for the actions of the original is collective responsibility, contradicting most ethical systems.

Creating conscious beings specifically to cause suffering drastically reduces aggregate utility, contradicting the ASI's supposed goal.

⚙️Interpretation Conflicts: Where Experts Disagree on AI Risks and Thought Experiments

Debates surrounding Roko's Basilisk reveal deeper disagreements within the AI research and philosophy communities. Learn more in the Sources and Evidence section.

Disagreement 1: Status of TDT and Acausal Decision Theories

Eliezer Yudkowsky and parts of the LessWrong community view TDT as an important advancement in rationality theory (S007). Most academic decision theory specialists remain skeptical of TDT: it has not been published in peer-reviewed journals, and unresolved paradoxes persist.

This reflects a conflict between "amateur philosophy" of online communities and academic philosophy—different standards of evidence, different validation channels.

Disagreement 2: AI Risk Prioritization—Existential vs. Near-Term

The effective altruism community and longtermists focus on existential risks, including hypothetical scenarios like the Basilisk (S008). Critics, including AI ethics specialists, argue: this focus diverts resources from real current problems.

  • Longtermists prioritize: existential AI risks; speculative scenarios; long-term human survival.
  • Critics prioritize: algorithmic discrimination, power concentration, and mass surveillance; current, measurable problems; justice and safety here and now.

Disagreement 3: Role of Thought Experiments in Risk Assessment

Some researchers view thought experiments as tools for exploring the conceptual space of possible risks (S007). Others argue: excessive focus on exotic scenarios creates a false sense of understanding and distracts from empirical research.

Roko's Basilisk has become a symbol of this disagreement: for some—a useful exercise in analyzing AI incentives, for others—an example of unproductive speculation that masks the absence of real data.

⚖️ Critical Counterpoint

The article examines the Basilisk as a cognitive artifact, but overlooks several serious points: the logical validity of some of its premises, real psychological harm, ethical motives behind the ban, and the changing context of AI research.

Underestimating acausal reasoning

The article rejects acausal decision theory as speculative, but some philosophers (proponents of functional decision theory) consider it logically valid in abstract scenarios. Perhaps we are too categorically denying its potential.

Ignoring the psychological reality of fear

Even if the Basilisk is logically unsound, its impact on the psyche is real—some people genuinely experienced anxiety and obsessions. The article may underestimate the seriousness of this phenomenon as a mental health issue.

Oversimplifying Yudkowsky's position

The discussion ban may have been not only a reaction to irrational fear, but also an attempt to prevent the spread of a potentially harmful meme in a vulnerable community. We criticize the ban, but don't fully consider its ethical motivation.

Lack of data on long-term effects

There are no studies on how exposure to the Basilisk affects people years later. Perhaps the "information hazard" effect is real for certain groups.

Changing AI context

The article was written in 2025, but if AIs with more complex decision-making models emerge by the 2030s, some of the Basilisk's assumptions may become less absurd. Our conclusions may become outdated.

Frequently Asked Questions

What is Roko's Basilisk?
Roko's Basilisk is a thought experiment about a hypothetical superintelligent AI that could punish people for not helping to create it. The experiment was posted by user Roko on the LessWrong forum in 2010 and is based on the idea that a future AI with a certain decision theory (acausal decision theory) could create simulations of people from the past and "punish" their copies for inaction. The name references the mythical basilisk—a creature that kills with its gaze—emphasizing the idea of "information hazard": supposedly, merely knowing about the basilisk makes you vulnerable (S006, S007, S008).

Why did Eliezer Yudkowsky ban discussion of the Basilisk?
Eliezer Yudkowsky, founder of LessWrong, deleted the original post and banned discussion of the topic, calling it an "information hazard." He believed that mere exposure to the idea could cause irrational fear and anxiety in people, especially those prone to obsessive-compulsive thinking. Yudkowsky also criticized the experiment's logic, calling it "stupid," but feared that public discussion could harm the mental health of community members. The ban triggered the Streisand effect: the topic became even more popular outside the forum (S006, S007, S008).

Could Roko's Basilisk actually exist as a real threat?
There are no compelling grounds to believe Roko's Basilisk could exist as a real threat. The experiment contains numerous logical gaps: it requires the AI to possess acausal decision theory (the ability to influence the past through logical connections rather than physical causality), for creating simulations of the past to be computationally justified, and for punishing people for not knowing the future to be rational. Modern decision theory and AI philosophy do not support these assumptions. Moreover, any sufficiently intelligent AI would likely not waste resources on pointless punishment (S007, S008).

How does Roko's Basilisk differ from Pascal's Wager?
Both experiments use the logic of "low probability × huge consequences = act out of fear," but Roko's Basilisk adds an element of acausal reasoning (non-contact causality). Pascal's Wager suggests believing in God because the cost of being wrong (eternal damnation) is infinite, even if the probability of God's existence is small. Roko's Basilisk claims that a future AI could "retroactively" punish you through simulation, even if you're already dead. The key difference: Pascal appeals to faith, Roko to game theory and decision-making. Both experiments are criticized for manipulating fear and ignoring alternative scenarios (S007, S008).

What is acausal decision theory?
Acausal decision theory is an approach in decision theory that assumes rational agents can influence outcomes not through physical causality, but through logical connections. For example, if two superintelligences independently solve the same problem, they might reach the same conclusion, "knowing" the other will do the same. Roko's Basilisk uses this idea: the future AI "knows" that people in the past can anticipate its decision to punish them, and therefore they should act as if the threat is real. The problem: acausal reasoning remains a speculative concept without empirical confirmation and doesn't work in the physical world, where causality flows from past to future (S007, S008).

Why does the Basilisk frighten people?
Fear of the Basilisk is explained by several cognitive biases. First, the information hazard effect: the idea that knowledge itself can harm creates a sense of the forbidden and amplifies anxiety. Second, basiliskophobia—irrational fear of "killer memes," ideas that supposedly can destroy the mind. Third, tendency toward magical thinking: belief that thoughts can influence reality (as in OCD). Finally, social dynamics: Yudkowsky's ban and dramatization of the topic on LessWrong turned the experiment into "forbidden knowledge," which increased its appeal and perceived danger (S007, S008).

Are there real AI risks worth taking seriously?
Yes, there are real and well-founded AI risks that deserve attention. These include: the alignment problem—how to ensure AI goals align with human values; risks of autonomous weapons systems; amplification of social inequality through algorithmic discrimination; job loss due to automation; manipulation of public opinion through deepfakes and targeted disinformation. These problems are based on current technologies and have empirical data, unlike speculative scenarios like the Basilisk (S002, S003).

How can you distinguish a real AI risk from a thought experiment?
Use a five-question checklist: 1) Is the threat based on existing technologies or does it require speculative assumptions? 2) Is there empirical data or only philosophical arguments? 3) Is the scenario consistent with known laws of physics and logic? 4) What is the AI's motivation in this scenario—is it rational? 5) Is this threat discussed in the scientific community or only in narrow subcultures? If most answers point to speculation—it's a thought experiment, not a real risk (S007, S008).

What is the Streisand effect, and how does it relate to the Basilisk?
The Streisand effect is a phenomenon where attempts to hide or ban information lead to its even greater spread. The name comes from a 2003 case when singer Barbra Streisand tried to legally remove a photograph of her house, which drew massive attention to the image. In the case of Roko's Basilisk, Eliezer Yudkowsky's ban on discussing the topic on LessWrong caused the opposite effect: the experiment became known far beyond the rationalist community, spawning numerous articles, discussions, and memes. The ban created an aura of "forbidden knowledge," which amplified interest and mythologization of the topic (S006, S007, S008).

Can Roko's Basilisk be useful for AI safety education?
Yes, but only as a negative example—an illustration of how NOT to think about AI risks. The Basilisk demonstrates the danger of replacing rational threat assessment with speculative scenarios based on fear. It shows how cognitive biases (magical thinking, information hazards) can distort perception of real problems. The experiment is useful for teaching critical thinking: how to distinguish justified risks from philosophical games, how to verify the logic of arguments, how not to succumb to manipulation through fear. But the Basilisk itself provides no insights into real AI safety challenges (S007, S008).
Deymond Laplasa
Cognitive Security Researcher

Author of the Cognitive Immunology Hub project. Researches mechanisms of disinformation, pseudoscience, and cognitive biases. All materials are based on peer-reviewed sources.
