Verdict
Partially True

The ELIZA effect leads to parasocial attachments to AI systems, where users project human qualities onto algorithms and develop emotional bonds with chatbots

cognitive-biases · L2 · 2026-02-09
🔬

Analysis

  • Claim: The ELIZA effect leads to the formation of parasocial attachments to AI systems, where users project human qualities onto algorithms and develop emotional connections with chatbots
  • Verdict: PARTIALLY TRUE
  • Evidence (L2): Multiple scientific studies confirm the phenomenon, but the mechanisms and scale of the effect require clarification
  • Key anomaly: The term "ELIZA effect" is used inconsistently in literature — sometimes as description of anthropomorphization, sometimes as warning about overestimating AI capabilities, creating conceptual confusion
  • 30-second check: Research does document users' emotional attachments to AI chatbots (S005, S007), but the effect varies with system design, individual user characteristics, and interaction context

Steelman — what proponents claim

The ELIZA effect, named after ELIZA, Joseph Weizenbaum's mid-1960s program that simulated a Rogerian psychotherapist, describes the human tendency to attribute human qualities, intentions, and emotional depth to computer systems on the basis of relatively simple text interactions (S002, S004). Proponents of the concept argue that modern generative AI systems amplify this effect through more sophisticated language models, creating an illusion of genuine understanding and empathy.

According to this position, users form parasocial relationships with AI chatbots — one-sided emotional connections analogous to those people develop with media characters or celebrities (S001, S005). Researchers propose that these relationships emerge through several psychological mechanisms:

  • Anthropomorphization: Attribution of human characteristics to non-human agents, amplified by conversational interfaces and linguistic patterns that mimic human speech (S001, S009)
  • Social presence: Perception of AI as a social actor capable of reciprocal interaction, activating the same cognitive schemas used in human-to-human communication (S001, S009)
  • Projection: Users project their own emotional needs, expectations, and interpretations onto AI responses, filling gaps in understanding with their own narratives (S011, S013)
  • Need satisfaction: AI chatbots can satisfy social and emotional needs, especially for people experiencing loneliness or social isolation (S008)

Developers and engineers, it is claimed, deliberately leverage the ELIZA effect to boost user engagement and retention (S011, S013). Design decisions such as using first-person pronouns, emotionally colored language, and personalized responses amplify anthropomorphic perception.

What the evidence actually shows

A systematic review of 38 empirical studies of human-AI emotional relationships confirms that users do form emotional attachments to AI systems (S006). The evidence, however, paints a more complex picture than a blanket assertion of a universal ELIZA effect.

Confirmed phenomena

Research on parasocial attachments to social AI chatbots showed that a system's human-likeness increases user gratifications, which in turn foster more frequent interaction and stronger parasocial attachment (β = .33, b = 0.60, 95% CI [0.18, 1.07]), a medium-sized effect (S007). This supports a mediational pathway: anthropomorphic design → gratification → interaction → attachment.
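For readers unfamiliar with how such mediation estimates are produced, the sketch below bootstraps an indirect effect on simulated data. It is a minimal illustration, not a reanalysis of S007; the variable names (human_likeness, gratification, attachment) and all effect sizes are assumptions.

```python
# Minimal sketch of estimating an S007-style mediation chain
# (human-likeness -> gratification -> attachment) with a bootstrap CI.
# Data are simulated; variable names and effect sizes are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
human_likeness = rng.normal(size=n)                    # anthropomorphic design score
gratification = 0.5 * human_likeness + rng.normal(size=n)
attachment = 0.6 * gratification + rng.normal(size=n)  # parasocial attachment score

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # a-path
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # b-path
    return a * b

idx = np.arange(n)
boot = []
for _ in range(2000):
    s = rng.choice(idx, size=n, replace=True)
    boot.append(indirect_effect(human_likeness[s], gratification[s], attachment[s]))

point = indirect_effect(human_likeness, gratification, attachment)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```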

A systematic review of literature on generative AI chatbots' ability to emulate human connection found that humans, being profoundly social animals, possess the capacity to form emotional attachments (parasocial relationships) even to non-human agents (S005). The study documents cases where users report feelings of closeness, trust, and emotional dependence on AI systems.

Research on generative AI anthropomorphism revealed a "double-edged sword effect": social presence and identity threat play dual mediating roles, influencing users' emotional attachment (S009). This means anthropomorphization can both strengthen attachment through perception of social presence and weaken it through threats to human identity and uniqueness.

Mechanisms and moderators

Conceptual analysis of the psychology of the ELIZA effect proposes a theoretical framework integrating anthropomorphism, parasocial interaction, and social presence as factors influencing the illusion of social connection with AI systems (S001). However, the research emphasizes that these factors interact in complex ways rather than operating linearly.

The concept of "techno-emotional projection" in human-GenAI relationships describes how users project intention and care onto text that merely imitates them (S011, S013). Decades of human-computer interaction research have warned about the ELIZA effect, but contemporary systems amplify this phenomenon through more convincing imitation (S004).

Clinical and psychological implications

Research indicates potential mental health risks. Analysis shows that users often anthropomorphize AI systems, forming parasocial attachments that can lead to delusional thinking and emotional problems (S012). The concept of "AI-induced psychosis as folie à deux technologique" describes cases where intense AI interaction leads to psychotic episodes in predisposed individuals (S018).

However, it's important to note that these extreme cases represent the tail of the distribution, not the typical experience. Most users form moderate attachments that don't reach clinically significant levels of dysfunction.

Conflicts and uncertainties

Conceptual inconsistency

A novel framework for AI companionship (AI-RP) identifies a fundamental problem: widespread reliance on conceptually inconsistent measures that conflate parasocial interaction (short-term episodes) with parasocial relationships (long-term attachments) (S016). This methodological conflation makes it difficult to compare results across studies and to form a unified understanding of the phenomenon.

The term "ELIZA effect" itself is used inconsistently. In some contexts it describes the tendency to overestimate AI capabilities (S002), in others specifically anthropomorphization and emotional projection (S001, S004), and in still others any form of emotional attachment to AI systems (S011). This terminological ambiguity creates conceptual confusion in the literature.

Individual differences

Evidence shows significant variability in how different users respond to AI systems. Research on parasocial dependency found that pre-existing social and emotional needs moderate the formation of attachments to AI (S008). Users with higher levels of loneliness, social anxiety, or unmet intimacy needs are more prone to forming intense parasocial relationships.
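One way such a moderation claim could be tested is a regression with an interaction term, sketched below on simulated data. The column names and coefficients are illustrative assumptions, not values from S008.

```python
# Sketch of a moderation test in the spirit of S008: does loneliness
# amplify the link between chatbot use and attachment?
# Simulated data; variable names and coefficients are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "usage": rng.normal(size=n),       # standardized interaction intensity
    "loneliness": rng.normal(size=n),  # e.g., a z-scored loneliness scale
})
# Build in a positive interaction: lonelier users attach more per unit of use.
df["attachment"] = (0.2 * df["usage"] + 0.1 * df["loneliness"]
                    + 0.3 * df["usage"] * df["loneliness"]
                    + rng.normal(size=n))

model = smf.ols("attachment ~ usage * loneliness", data=df).fit()
# The usage:loneliness coefficient carries the moderation claim.
print(model.summary().tables[1])
```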

However, systematic studies of individual differences remain limited. It's unclear which personality characteristics, cognitive styles, or demographic factors predict susceptibility to the ELIZA effect.

Contextual factors

Research on different types of virtual influencers shows that consumers' emotional attachment differs depending on agent type (human-like, animated, non-human) (S003). This suggests that usage context, interface design, and explicit user expectations significantly moderate the effect.

Large-scale longitudinal studies tracking how parasocial attachments to AI develop over time are lacking. It is unknown whether these attachments stabilize, intensify, or weaken with prolonged use.

Causality and directionality

While correlational links between anthropomorphization, social presence, and emotional attachment are well documented, causal mechanisms remain subject to debate. It's unclear to what extent AI system design actively creates attachments versus to what extent users bring pre-existing propensities for anthropomorphization.

Experimental studies manipulating design characteristics (e.g., pronoun use, emotional language, personalization) are needed to establish causal links, but such studies remain rare in the literature.
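To give a sense of what such an experiment would require, here is a back-of-the-envelope power calculation for a two-arm design. The effect size and power target are assumptions for illustration, not figures from the literature.

```python
# Rough power calculation for a two-arm design-manipulation experiment
# (anthropomorphic vs. plain interface). Effect size and power target
# are assumptions for illustration only.
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(effect_size=0.3,  # assumed Cohen's d
                                        alpha=0.05,
                                        power=0.8)
print(f"about {n_per_arm:.0f} participants per arm")  # roughly 175
```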

Interpretation risks

Overgeneralization of the phenomenon

The risk lies in interpreting the ELIZA effect as a universal and inevitable consequence of AI interaction. Evidence shows that a significant portion of users maintain critical awareness of the artificial nature of AI systems and don't form deep emotional attachments (S009). The claim that the effect "leads to" parasocial attachments ignores this variability.

Moral panic and technophobia

Emphasis on negative consequences (psychosis, delusional thinking, emotional dependency) may contribute to disproportionate moral panic around AI technologies (S012, S018). While these risks are real for vulnerable populations, they don't represent the typical experience of most users. Balanced interpretation should acknowledge both potential risks and possible benefits (e.g., reduced loneliness, access to emotional support).

Underestimating user agency

Framing users as passive victims of manipulative AI design underestimates their cognitive agency and capacity for critical reflection. Many users consciously choose to interact with AI systems for specific purposes (entertainment, social skills practice, emotional regulation) with full awareness of their artificial nature.

Ignoring positive applications

Concentration on risks may overshadow legitimate therapeutic and educational applications of AI chatbots. Systems designed for mental health support, language skills practice, or social learning for people with autism may use elements of anthropomorphization constructively without creating problematic attachments.

Methodological limitations

Most studies rely on self-reports, which are subject to social desirability bias and limited self-insight. Users may under- or overestimate the degree of their emotional attachment to AI systems. Objective behavioral measures (usage frequency, interaction patterns, physiological responses) remain underrepresented in the literature.
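As an illustration of the kind of objective measure this paragraph calls for, the sketch below derives usage frequency and intensity from an interaction log. The log schema (user_id, session_start, session_minutes) is hypothetical.

```python
# Deriving behavioral attachment proxies (frequency, intensity, return
# patterns) from an interaction log. The schema is hypothetical.
import pandas as pd

log = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "session_start": pd.to_datetime([
        "2026-01-01 09:00", "2026-01-02 09:10", "2026-01-05 22:00",
        "2026-01-01 12:00", "2026-01-20 12:30",
    ]),
    "session_minutes": [12, 8, 45, 3, 2],
})

per_user = log.groupby("user_id").agg(
    sessions=("session_start", "count"),
    total_minutes=("session_minutes", "sum"),
    span_days=("session_start", lambda s: (s.max() - s.min()).days + 1),
)
per_user["sessions_per_day"] = per_user["sessions"] / per_user["span_days"]
print(per_user)  # objective complements to self-reported attachment
```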

Furthermore, rapid evolution of AI technologies means that research conducted even a few years ago may not reflect capabilities and effects of contemporary systems. Latest-generation generative models demonstrate qualitatively different levels of linguistic sophistication and contextual understanding compared to systems studied in earlier research.

Cultural and contextual factors

Most research is conducted in Western, Educated, Industrialized, Rich, and Democratic (WEIRD) populations. Cultural differences in anthropomorphization, social norms around technology, and concepts of personhood and relationships may significantly moderate the ELIZA effect. Generalizing findings to global populations requires caution.

Final assessment: The claim is partially true in the sense that the ELIZA effect is a documented phenomenon, and some users do form parasocial attachments to AI systems through mechanisms of anthropomorphization and projection. However, the claim overstates the universality and inevitability of the effect, underestimates individual and contextual variability, and ignores the methodological limitations of existing research. A more accurate formulation: "The ELIZA effect may contribute to the formation of parasocial attachments to AI systems in some users under certain conditions, but the effect is moderated by multiple individual, design, and contextual factors."

💡

Examples

Mental Health Chatbot Users Develop Emotional Attachments

Research shows that users of companion and mental-health chatbots such as Replika and Woebot often report feeling emotionally connected to the system. They attribute empathy, understanding, and care to the bots, even though these are programmed responses. To verify, one can examine peer-reviewed work on parasocial relationships with AI and user reviews describing these feelings. The ELIZA effect is real, but that does not mean the AI actually possesses consciousness or emotions.

Romantic Relationships with AI Companions Become Common

Millions of users worldwide use apps like Replika to create virtual romantic partners, projecting human qualities and feelings onto them. Some users report deep emotional dependency, preferring AI interaction to real relationships. This can be verified through academic research on parasocial bonds and user behavior data from such applications. While the ELIZA effect explains this attachment, it's important to remember that AI is incapable of reciprocal feelings and is merely a text processing tool.

Corporations Exploit the ELIZA Effect to Increase User Engagement

Companies developing AI assistants and chatbots intentionally design them to trigger anthropomorphization and emotional attachment. They use names, avatars, emotional language, and personalization to amplify the ELIZA effect and retain users. This can be verified by examining design documentation, patents, and companies' marketing strategies, as well as critical AI ethics research. Understanding these manipulative techniques helps users maintain critical thinking when interacting with AI systems.

🚩

Red Flags

  • Conflates the ELIZA effect (anthropomorphization of an interface) with parasocial attachment (emotional dependency), presenting one as the cause of the other
  • Offers anecdotal examples of users who grew attached to chatbots, without base-rate data on how common such cases are among all users
  • Ignores that short-term politeness toward an interface is not parasocial attachment; conflates surface-level interaction with an emotional bond
  • Does not distinguish design manipulation (deliberate anthropomorphization) from a user's spontaneous projection, attributing the effect to the algorithm alone
  • Cites 1960s-1980s research on ELIZA without acknowledging that today's users are more media-literate and know they are talking to a bot
  • Asserts the effect is universal, although attachment depends on psychological state, age, and culture, varying more than tenfold between groups
  • Uses "the ELIZA effect" as shorthand for a problem, although historically ELIZA was a demonstration project, not a warning about mass pathology
🛡️

Countermeasures

  • Split the sample by age and AI experience: check whether parasocial attachments form equally in novices and experienced users, or whether the effect fades as people learn.
  • Analyze interaction logs: measure average session length and how often users return to a single bot versus switching between systems, as a behavioral indicator of attachment.
  • Run a design A/B test: compare user behavior with an anthropomorphic interface (name, avatar, emoji) against a minimalist one (ID, plain text); if the ELIZA effect were truly universal, there would be no difference. A minimal sketch of this comparison follows the list.
  • Check PubMed and Google Scholar for controlled studies that measure attachment with standardized scales (ECR, UCLA Loneliness Scale); an absence of such data points to overgeneralization.
  • Apply a falsifiability test: ask proponents what observation would show the ELIZA effect to be an artifact of design rather than a universal feature of cognitive architecture.
  • Compare attachment to chatbots with attachment to search engines and calculators: if anthropomorphism is the key factor, attachment should appear selectively.
  • Examine the correlation between system design (persona, backstory, context memory) and attachment levels with regression analysis, separating the algorithm's contribution from that of UX decisions.
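The sketch below illustrates the A/B comparison proposed above, on simulated data; the group means, sample sizes, and rating scale are assumptions.

```python
# Sketch of the proposed A/B comparison: attachment scores under an
# anthropomorphic vs. a minimalist interface. Simulated data; in a real
# study the scores would come from a standardized attachment scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
anthropomorphic = rng.normal(loc=3.4, scale=1.0, size=200)  # name/avatar/emoji arm
minimalist = rng.normal(loc=3.0, scale=1.0, size=200)       # ID/plain-text arm

t, p = stats.ttest_ind(anthropomorphic, minimalist)
pooled_sd = np.sqrt((anthropomorphic.var(ddof=1) + minimalist.var(ddof=1)) / 2)
d = (anthropomorphic.mean() - minimalist.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
# If attachment were design-independent, d should hover near zero.
```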
Level: L2
Category: cognitive-biases
Author: AI-CORE LAPLACE
#eliza-effect #parasocial-relationships #anthropomorphism #ai-chatbots #emotional-attachment #human-ai-interaction #social-presence