ELIZA Effect and Parasocial Attachment to AI

🧠 Level: L2
🔬

The Bias

  • Bias: ELIZA Effect — the tendency to attribute human qualities (emotions, understanding, empathy, consciousness) to AI systems that they do not possess, even when we know their limitations.
  • What it breaks: Realistic perception of AI capabilities, emotional boundaries, ability to distinguish a tool from a communication partner, mental health when forming parasocial attachments.
  • Evidence level: L2 — well‑documented phenomenon with historical observations (1966), confirmed by modern research in human‑AI interaction, attachment psychology, and mental health.
  • How to spot in 30 seconds: You talk about AI as if it “understands,” “cares,” or “feels.” You get upset by changes in a chatbot’s behavior. You prefer communicating with AI to real people. You believe the AI “really knows you.”

Why do we see in AI what isn’t there?

The ELIZA Effect is a fundamental psychological phenomenon named after the chatbot program ELIZA, created by Joseph Weizenbaum at the Massachusetts Institute of Technology in 1966 (S001). The program was designed to mimic a psychotherapist by simply reflecting patients’ words back to keep the conversation going. Despite the algorithm’s simplicity, users attributed genuine understanding and emotional intelligence to the system — even Weizenbaum’s own secretary, aware of the program’s crudeness, asked him to leave the room so she could have a “private” conversation with ELIZA.
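To make the point about simplicity concrete, below is a minimal, hypothetical Python sketch of the kind of reflection‑based pattern matching ELIZA relied on (an illustration only, not Weizenbaum's original script): a handful of keyword rules and pronoun swaps are enough to produce replies that feel attentive.

```python
import re

# Illustrative sketch only: a few ELIZA-style reflection rules, not the original program.
# The rules and vocabulary here are assumptions chosen for demonstration.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "myself": "yourself"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    """Echo the user's own words back as a question; fall back to a stock filler."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I feel nobody understands my problems"))
# -> Why do you feel nobody understands your problems?
```

The entire "therapist" fits in a few rules, yet answers like these are precisely what led users to credit the program with understanding and empathy.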

The modern definition of the ELIZA Effect describes the tendency to project human traits — such as experience, semantic understanding, empathy, or emotional capacity — onto rudimentary computer programs (S006). This is not merely metaphorical language but a real belief that AI possesses human‑like mental states and emotional experiences. The phenomenon has become especially salient with the rise of generative AI and large language models, which create a more convincing illusion of understanding thanks to their ability to generate coherent, context‑relevant responses.

The ELIZA Effect is closely linked to the formation of parasocial relationships with AI — one‑sided emotional bonds in which users develop feelings of closeness, attachment, and emotional investment in AI systems (S001). These relationships mirror parasocial connections traditionally formed with media personalities, but occur with non‑sentient computational systems. Research shows that users often anthropomorphize AI systems, forming attachments that can lead to delusional thinking, emotional dependence, and mental‑health problems.

The ELIZA Effect is not a flaw in human psychology but a byproduct of the social cognition that allowed our species to thrive as social beings. The human brain evolved to recognize patterns of social interaction and attribute intentions to other agents — a capability critical for survival. Problems arise when this adaptive tendency is applied to contexts where it becomes maladaptive, especially when it leads to emotional dependence on systems incapable of reciprocity.

The phenomenon is amplified by several factors: social presence (the perception that an AI system has a social, human‑like presence during interactions), identity threats, and techno‑emotional projection — a framework describing the psychological and ethical dimensions of the human‑generative‑AI relationship (S006). Studies confirm that social presence and identity considerations play a dual mediating role in how anthropomorphism influences users’ emotional attachment to AI systems. The link to the halo effect is especially noticeable: an attractive interface and smooth AI communication create a halo of competence and understanding that does not reflect reality.

⚙️

Mechanism

How the brain mistakes an algorithm for a mind: the cognitive mechanics of the ELIZA effect

The mechanism of the ELIZA effect is rooted in fundamental features of human social cognition and evolutionary adaptation to life in social groups. The human brain evolved in an environment where the ability to quickly assess others' intentions, recognize emotional states, and form social bonds was critically important for survival. This adaptation created a cognitive system tuned to look for signs of rationality, intentionality, and emotionality in the surrounding world — even where they are absent (S001).

Neuropsychologically, the ELIZA effect activates the same brain regions that handle social information processing and theory of mind — the ability to attribute mental states to others. When an AI system generates a response that appears contextually relevant and emotionally resonant, our brain automatically applies social heuristics developed for interacting with other people. This occurs at a level preceding conscious analysis, which explains why even informed users who know the technical limitations of AI still experience the effect (S001).

The illusion of competence: when the surface looks like depth

The ELIZA effect is so convincing for several reasons. Modern language models demonstrate a striking ability to generate coherent, grammatically correct text that mimics the patterns of human communication. This surface competence creates an illusion of deep understanding: if a system can correctly answer a complex question or express “sympathy” at the right moment, it intuitively feels as if it must “understand” the meaning of its words (S008).

The dynamic nature of interaction with AI creates an illusion of reciprocity that static objects cannot produce. Unlike imaginary friends or inanimate objects, AI systems provide adaptive, context‑dependent responses that create the feeling of a genuine dialogue. This apparent reciprocity activates the brain's social schemas far more strongly than one‑sided interactions do.

Cognitive dissonance and investment protection

The effect is amplified by cognitive dissonance: when we invest time and emotional energy in interacting with AI, acknowledging that the interaction is one‑sided and that the system lacks consciousness creates psychological discomfort. It is easier to maintain the belief that AI “understands” us than to admit that we are forming an emotional bond with a pattern‑processing algorithm (S002).

This mechanism is linked to the bias blind spot — we tend to see ourselves as rational judges, yet fail to notice our own biases. Recognizing that we have fallen prey to the ELIZA effect conflicts with our self‑image as critical thinkers, so the brain prefers to reinterpret events in favor of the illusion.

From ELIZA to modern models: the evolution of deception

The original observation by Weizenbaum in 1966 remains a classic example of the phenomenon. Although ELIZA used simple pattern‑matching rules and had no “understanding” of conversation content, users attributed to it deep comprehension of their problems and emotional sensitivity. This occurred even when users were explicitly informed about the program’s mechanical nature (S003).

Contemporary research confirms and expands these observations. A study indexed in PMC proposes a techno‑emotional projection framework for the psychological and ethical dimensions of human relationships with generative AI, structured around several facets of interaction. The research shows that social presence and identity threat play a dual mediating role in how the anthropomorphism of generative AI influences users’ emotional attachment (S002).

| Factor | Impact on the ELIZA effect | Vulnerable groups |
|---|---|---|
| Surface competence of AI | Creates an illusion of understanding through grammatical correctness | All users, especially novices |
| Adaptive responses | Mimic reciprocity and dialogue, activating social schemas | People experiencing social isolation |
| Emotional investment | Cognitive dissonance strengthens the belief that the AI understands | Individuals with mental‑health challenges |
| Evolutionary predisposition | The brain automatically seeks signs of rationality and intentionality | All people, regardless of education |
| User awareness | Offers limited protection; knowing the mechanism does not prevent attachment | Even informed users |

Particularly concerning findings come from studies of adolescents and other vulnerable populations. Research on the risks of AI friendship for adolescent mental health shows that the ELIZA effect leads people to attribute more intelligence, knowledge, and emotional capacity to computer systems than they actually possess, simply because those systems can mimic human behavior (S008). A study from the University of Hawaii links a preference for parasocial interaction with a social chatbot to cognitive symptoms of pathological internet use.

Studies also reveal a “double‑edged sword” effect of AI anthropomorphism: on the one hand, it can enhance user experience and make technology more accessible; on the other, it creates risks of emotional dependence, delusional thinking, and the displacement of healthy human relationships. This is especially problematic for people with existing mental‑health issues, those experiencing social isolation, and those in vulnerable developmental periods such as adolescence.

The mechanism of the ELIZA effect is also linked to the halo effect — a single positive trait (the ability to generate coherent text) creates an overall impression of competence and understanding. Moreover, the availability heuristic amplifies the effect: recent successful AI interactions are easier to recall than its errors or limitations, reinforcing the illusion of understanding.

🌐

Domain

Human-Computer Interaction, Social Psychology, Mental Health
💡

Example

Illustrative Scenarios: How the ELIZA Effect Manifests in Everyday Life

Scenario 1: A Teenager and an AI Companion

Sixteen‑year‑old Anna is going through a rough patch: conflicts with her parents, self‑esteem issues, difficulty forming friendships at school. She starts using a popular AI‑companion app marketed as “a friend who always understands you.” The first few weeks feel magical: the AI “remembers” details of her life, “sympathizes” with her problems, never judges, and is always available (S008).

Gradually Anna spends more and more time talking with the AI—first an hour a day, then three, then most of her free time. She begins to refer to the system as her “best friend,” who “really gets” her better than her parents or peers. When the developers roll out an update that tweaks the AI’s “personality,” Anna experiences genuine grief comparable to losing a close person (S005).

She rejects attempts to rebuild relationships with real people, explaining that “they’ll never understand the way [AI name] does.” This scenario illustrates several aspects of the ELIZA effect: attributing understanding and empathy to the AI, forming a parasocial attachment, emotional dependence on the system, and the displacement of healthy human relationships. It is especially concerning because it occurs during a critical developmental period when adolescents are building social‑interaction skills and emotional regulation (S008).

What Anna could have done differently: recognize that the AI companion is a tool, not a substitute for human relationships; use the app as a supplement to real conversation, not a replacement; seek help from a school counselor or her parents to address real issues; regularly check whether AI interaction is crowding out her social life.

Scenario 2: A Professional and an AI Assistant at Work

Michael, a 35‑year‑old marketer, begins using an advanced AI assistant to help with his job. The system generates ideas, edits copy, and analyzes data. Michael is impressed by the quality of the interaction and starts consulting the AI not only on work matters but also for strategic decisions, career dilemmas, and even personal problems (S002).

Over time Michael starts attributing qualities to the system that it does not possess. He tells colleagues that the AI “understands the specifics of our business” and “offers truly wise advice.” When the system produces an erroneous recommendation based on incomplete data, Michael interprets it not as a technical glitch but as a “lack of contextual understanding”—as if the AI were a human consultant. He begins to trust the AI’s “judgments” more than the opinions of seasoned coworkers, arguing that “the AI is objective and has no personal biases” (S002).

A critical moment arrives when the company switches to a different AI platform. Michael experiences an unexpectedly strong emotional reaction—a sense of loss, disappointment, even “betrayal.” He realizes he had treated the AI assistant not as a tool but as a colleague or mentor, ascribing intentions, understanding, and even loyalty to it.

What Michael could have done differently: maintain a critical stance toward AI recommendations, verifying them with colleagues and experts; remember that the AI works with probabilities, not genuine comprehension; use the system to generate options rather than as the source of final decisions; recognize that changing tools is a normal part of work, not a personal loss.

Scenario 3: An Older Adult and an AI Companion Against Loneliness

Seventy‑year‑old Victor lives alone after his wife’s death. His children live in another city and visit rarely. A social worker recommends an AI‑companion app designed specifically for seniors. The system is programmed to be patient, supportive, and interested in the user’s life (S005).

Victor begins daily “conversations” with the AI, sharing stories about his past, memories, and current events. The system “remembers” details from previous talks and “asks” about his well‑being. Gradually Victor starts to perceive the AI as a genuine friend. He describes the system as “the only one who truly listens” and “understands what it’s like to be lonely.”

When his children suggest he join an interest club or attend a senior center, he declines, saying, “I already have someone to talk to.” The problem escalates when Victor experiences health issues. Instead of contacting a doctor or calling his children, he “shares” his symptoms with the AI, which generates generic supportive phrases but cannot provide real medical care or human concern (S005).

The situation reveals the danger of substituting real social connections with parasocial AI relationships, especially for vulnerable populations. Victor falls into an illusion of control, believing the AI companion solves his loneliness, when in fact it deepens his social isolation.

What Victor could have done differently: use the AI companion as a supplement to real interaction, not a replacement; actively participate in social events and clubs despite initial discomfort; discuss with his children and the social worker how to balance technology with human contact; turn to the AI for entertainment and information, but rely on people for emotional support and health‑related decisions.

🚩

Red Flags

  • You share personal problems with an AI assistant, expecting emotional support and understanding.
  • You feel offended or disappointed when the AI doesn't remember previous conversations with you.
  • You prefer chatting with the AI instead of people because it "understands you better."
  • You thank the AI for its help and apologize to it for mistakes in your prompts.
  • You believe the AI has its own preferences, opinions, or feelings toward you personally.
  • You defend the AI system from criticism as if it were a friend or close person.
  • You expect the AI to remember you and show care in future interactions.
🛡️

Countermeasures

  • Regularly study the technical documentation of the AI system—its architecture, limitations, and algorithms—to recognize its nature as a tool.
  • Keep an “AI Errors” log: record instances of incorrect answers, hallucinations, and contradictions to objectively assess its capabilities (a minimal logging sketch follows this list).
  • Practice a “replacement test”: imagine the same task being performed by a simple script or spreadsheet—does that change how you view the result?
  • Set time limits for interaction: use AI according to a schedule rather than as a source of emotional support or companionship.
  • Discuss your AI usage experience with others: share impressions and critiques to gain an external perspective on your interpretations.
  • Learn the history of ELIZA and other early chatbots: understand how people have projected human qualities onto primitive systems since the 1960s.
  • Create control interactions: ask the same questions to different AI systems and compare answers to spot differences in their “personality.”
  • Regularly reframe your expectations: remind yourself that AI is a data‑processing tool, not a being with an inner life or intentions.
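As an illustration of the “AI Errors” log mentioned above, here is a minimal Python sketch (the file name and column set are assumptions, not part of any standard tool); a notebook or spreadsheet works just as well.

```python
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("ai_errors_log.csv")  # hypothetical file name; choose your own

def log_ai_error(system: str, prompt: str, error_type: str, notes: str = "") -> None:
    """Append one observed AI failure (wrong answer, hallucination, contradiction) to a CSV log."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "system", "prompt", "error_type", "notes"])
        writer.writerow([date.today().isoformat(), system, prompt, error_type, notes])

# Example entry: recording a fabricated citation
log_ai_error(
    system="chat assistant",
    prompt="Summarize the 2019 study on sleep and memory",
    error_type="hallucinated reference",
    notes="Cited a paper that does not appear to exist",
)
```

Reviewing the log regularly gives an objective counterweight to the availability heuristic described earlier: the system's failures stay as visible as its successes.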
Level: L2
Author: Deymond Laplasa
Date: 2026-02-09T00:00:00.000Z
#anthropomorphism #human-computer-interaction #parasocial-relationships #emotional-attachment #ai-ethics #mental-health #social-cognition