Deymond Laplasa
© 2026 Deymond Laplasa. All rights reserved.

Cognitive immunology. Critical thinking. Defense against disinformation.

📁 Media Literacy
⚠️Ambiguous / Hypothesis

The Collective Digital Unconscious: How Algorithms Create Modern Myths and Why We Believe Them

Social media algorithms and recommendation systems are forming a new type of collective memory and mythology, acting as a digital analog of the Jungian unconscious. Research shows that digital tools don't simply store information, but actively construct narratives that become shared "truths" for millions of people. This article examines the mechanisms of digital myth creation, their difference from traditional collective memory, and protocols for cognitive defense against algorithmic manipulation.

🔄 Updated: February 4, 2026
📅 Published: February 1, 2026
⏱️ Reading time: 13 min

Neural Analysis
  • Topic: Mechanisms of collective narrative formation through algorithmic systems and their influence on contemporary mythology
  • Epistemic status: Moderate confidence — the concept sits at the intersection of digital sociology, cognitive psychology, and memory studies; empirical base is growing, but long-term effects remain understudied
  • Level of evidence: Primarily observational studies, digital trace data analysis, theoretical models of collective memory; controlled experiments are limited by ethical constraints
  • Verdict: Algorithms do create new forms of collective narratives, but calling this the "unconscious" is a metaphor requiring caution. Digital tools open unprecedented opportunities for studying collective memory formation both within and beyond digital spaces.
  • Key anomaly: Conceptual substitution: algorithmic content curation is presented as spontaneous collective unconscious, though it's driven by specific business models and engineering decisions
  • 30-second check: Open three different social networks and compare the top 5 trends — if they're identical, this isn't "collective unconscious" but synchronized algorithmic agenda-setting
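The 30-second check above is easy to operationalize: collect the top trends from each platform by hand and compare the lists pairwise. A minimal sketch, where the trend lists are hypothetical placeholders rather than real data:

```python
def trend_overlap(a, b):
    """Jaccard similarity of two trend lists: 0 = disjoint, 1 = identical."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

# Hypothetical top-5 trend lists; a real check would use actual trending pages.
platform_a = ["topic-1", "topic-2", "topic-3", "topic-4", "topic-5"]
platform_b = ["topic-1", "topic-2", "topic-3", "topic-4", "topic-6"]

print(trend_overlap(platform_a, platform_b))  # 4 shared of 6 total -> ~0.67
```

High overlap across independently operated platforms is the synchronized agenda-setting the check is designed to surface.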
Every day, billions of people immerse themselves in digital spaces where algorithms invisibly shape what we consider true, important, or real. These systems create a new type of collective memory—a digital unconscious that functions like a modern analog of Jungian archetypes, but governed not by evolution, but by mathematical models. Research shows that recommendation algorithms don't simply reflect our preferences—they actively construct narratives that become shared "truths" for entire generations (S003). This article explores the mechanisms of digital myth creation, their distinction from traditional collective memory, and protocols for cognitive defense.

📌The Digital Unconscious as a New Form of Collective Memory: Defining the Phenomenon and Its Boundaries

Carl Jung's term "collective unconscious" described universal psychic structures transmitted across generations and cultures. The digital unconscious is a qualitatively different phenomenon: a system of shared representations, narratives, and "truths" formed not by biological evolution or cultural transmission, but by algorithmic systems that determine what information reaches users' consciousness (S009).

Unlike traditional collective memory, which formed over centuries through oral tradition, writing, and institutional practices, the digital unconscious is created and modified in real time. More details in the section Reality Verification.

⚠️ Key Distinction: From Organic Transmission to Algorithmic Curation

Traditional collective memory formed through social interaction among multiple agents—storytellers, historians, teachers, journalists—who participated in selecting and transmitting information. Digital tools have radically changed this process: social media and search engine algorithms now serve as the primary curators of collective memory (S003).

| Parameter | Organic Transmission | Algorithmic Curation |
| --- | --- | --- |
| Formation Speed | Centuries, generations | Hours, days, weeks |
| Selection Agents | Multiple independent actors | Centralized algorithmic systems |
| Distribution Criteria | Cultural significance, social consensus | Engagement metrics (likes, clicks, view time) |
| Mechanism Visibility | Transparent to participants | Hidden from most users |

🧩 Three Levels of the Digital Unconscious

First level—personal algorithmic bubble: individual news feeds, recommendations, search results adapted to a specific user.

Second level—group narratives: shared stories, memes, and event interpretations spreading within specific communities and amplified by algorithms that reward engagement.

Third level—global digital myths: narratives that reach critical mass and become part of public discourse, influencing politics, economics, and culture (S009).

Digital tools open new possibilities for studying collective memory formed both within and outside digital space (S003). But this same system creates an illusion of spontaneous consensus, where the repetition of algorithmic recommendations is perceived as confirmation of truth.

🔎 Boundaries of the Phenomenon: Where Algorithm Ends and Human Choice Begins

The digital unconscious is not a fully deterministic system. Users retain agency—the ability to critically evaluate information, seek alternative sources, and consciously move beyond algorithmic recommendations.

User Agency: Theoretically present, practically rarely exercised. Most users are unaware of the degree of algorithmic influence on their information diet and rarely take active steps to diversify sources (S002).

Boundary Blurring: The line between algorithmic curation and human choice becomes increasingly blurred, especially when algorithms use machine learning to predict and shape user preferences.

The result: users believe they are choosing information independently, while the algorithm has already predetermined the spectrum of available options. This creates an illusion of freedom under actual direction.

[Figure: Three concentric layers of the digital unconscious: personal algorithmic bubbles, group narratives, and global digital myths, each amplifying and shaping the next level.]

🧱Steel Version of the Argument: Why Algorithms Actually Create Modern Myths

Before critically analyzing the phenomenon of digital unconscious, it's necessary to present the most compelling arguments for why algorithms actually function as creators of modern mythology. The steel version of the argument requires examining the strongest evidence and logical constructions supporting this position. More details in the Thinking Tools section.

🔁 First Argument: Algorithms Create Shared Reality Through Attention Synchronization

Traditional myths functioned as shared narratives that synchronized community attention and values. Modern algorithms perform an analogous function, but with unprecedented speed and scale.

When millions of users simultaneously see the same trends, news, or memes in their feeds, it creates an effect of shared reality—a sense that "everyone is talking about this." This attention synchronization creates social pressure: users feel the need to know about trending topics, form opinions about them, participate in discussions.

Algorithms don't just distribute information—they create an illusion of consensus through simultaneous impact on millions of minds.

⚠️ Second Argument: Algorithms Amplify Emotionally Charged Narratives

Research shows that social media algorithms are optimized to maximize engagement, which systematically leads to amplification of emotionally charged content—especially that which triggers anger, fear, or outrage (S002).

This optimization creates a distorted picture of reality, where conflicts, threats, and scandals appear more widespread and significant than they actually are. Such systematic bias in information presentation shapes collective perceptions of the world that can differ significantly from objective reality.

🧬 Third Argument: Algorithms Create Self-Confirming Information Ecosystems

Content personalization leads to the creation of information bubbles, where users predominantly encounter information confirming their existing beliefs (S003).

  1. User sees content matching their views
  2. Engages with it (like, comment, share)
  3. Algorithm interprets this as preference
  4. Shows even more similar content
  5. Bubble becomes increasingly isolated

Over time, these bubbles create parallel realities, where different user groups have radically different perceptions of the same events or phenomena.
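The five-step loop above can be illustrated with a toy simulation: a recommender starts balanced between two topics, treats every like as a preference signal, and gradually narrows what it serves. All parameters here are illustrative assumptions, not measured platform values:

```python
import random

random.seed(42)

def simulate_bubble(steps=200, initial_pref=0.55):
    """Toy filter-bubble loop: each like nudges the serving probability."""
    p_serve_a = 0.5           # step 4 input: algorithm starts balanced
    p_like_a = initial_pref   # user mildly prefers topic A over topic B
    for _ in range(steps):
        topic = "A" if random.random() < p_serve_a else "B"   # step 1: content shown
        liked = random.random() < (p_like_a if topic == "A" else 1 - p_like_a)
        if liked:             # steps 2-3: engagement read as preference
            delta = 0.02 if topic == "A" else -0.02
            p_serve_a = min(0.99, max(0.01, p_serve_a + delta))  # step 4: feed shifts
    return p_serve_a          # step 5: how one-sided the feed became

final = simulate_bubble()
print(f"Share of topic A served after 200 steps: {final:.2f}")
```

Even a slight initial preference, fed back through the loop, tends to drift the feed away from a balanced mix; the user never asked for isolation, the loop produced it.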

🕳️ Fourth Argument: Algorithms Fill Information Voids With Their Own Constructions

When users search for information about obscure or emerging phenomena, algorithms often fill these information voids with content optimized for engagement rather than accuracy.

This is particularly problematic in cases of developing events or complex scientific topics, where quality information may be limited. In such situations, algorithms can disproportionately amplify speculative, sensational, or misleading content that becomes the de facto source of "knowledge" for millions of users.

🧩 Fifth Argument: Algorithms Create New Forms of Social Proof

Engagement metrics—likes, shares, views—function as a new form of social proof, signaling to users what is important, true, or valuable (S002).

Algorithms amplify this effect by showing high-engagement content to more users, creating a snowball effect. Virality becomes a proxy for truth: if millions of people have shared information, it's perceived as more credible, regardless of its actual accuracy.

Social proof in the digital environment works faster and at greater scale than in traditional communities, but the mechanism remains the same: the majority can be wrong together.

🔁 Sixth Argument: Algorithms Create Temporal Structure of Collective Experience

Traditional myths organized time through cyclical narratives—seasonal holidays, rituals, anniversaries. Algorithms create a new temporal structure through trends, viral moments, and "main events of the day."

This structure creates a sense of shared time and shared experience: users worldwide simultaneously experience the same digital events, discuss the same topics, participate in the same discussions. Synchronization creates a powerful sense of belonging to a global community united by shared digital experiences.

⚙️ Seventh Argument: Algorithms Use Machine Learning to Predict and Shape Preferences

Modern algorithms don't just react to user behavior—they predict future preferences and actively shape them. The use of machine learning and artificial intelligence in the digital sphere is in early stages, but already demonstrates significant potential (S006).

These systems analyze massive volumes of user behavior data, identify patterns, and create models that predict which content will generate the greatest engagement. This creates a feedback loop where algorithms not only reflect user preferences but actively shape them, offering content that maximizes certain types of reactions.

Understanding these mechanisms is critical for analyzing how algorithms transform connection into dependency and why logical fallacies become the foundation of collective beliefs.

🔬Evidence Base: What Research Says About Digital Collective Memory Formation Mechanisms

Empirical data on digital collective memory accumulates slowly—this is a young field, and many mechanisms remain insufficiently studied. Digital traces provide unprecedented access to collective representation formation processes in real time (S003).

📊 Digital Tools as New Research Methods

Digital trace data allows researchers to analyze how information spreads, transforms, and persists in networks. This provides access to processes that were previously unavailable for direct observation. For more details, see the Sources and Evidence section.

However, the application of AI and machine learning in digital data analysis is still in its early stages (S006). Rapidly growing data volumes, evolving content tactics, and ethical challenges of behavioral analysis mean that conclusions about digital unconscious mechanisms remain preliminary.

🧾 Methodological Challenges

The main problem is algorithmic opacity. Social media platforms do not disclose recommendation system details, making independent research of their influence difficult.

Algorithms are constantly updated, causing research findings to quickly become outdated. Reviews of the field catalog the contributions, limitations, and gaps of existing work, clarifying both the potential and the constraints of AI techniques (S006).

🔬 Filter Bubbles: Mixed Evidence

Filter Bubble Hypothesis: Social media users encounter more homogeneous content than through traditional media.

Counter-Evidence: Many users actively seek diverse sources; algorithmic personalization does not always lead to radical isolation (S002).

Conclusion: A nuanced understanding is needed of how algorithms affect the information diet of different user groups.

📊 Emotionally Charged Content: Compelling Evidence

Social media algorithms systematically amplify emotionally charged content. Posts that trigger anger, outrage, or fear receive significantly more engagement and spread more widely (S002).

This creates a systematic bias in which information reaches the largest audience, potentially distorting collective perceptions of how prevalent phenomena and problems actually are. The mechanism operates independently of content truthfulness—emotional charge, not verification, determines visibility.

The connection between algorithmic amplification and collective memory formation is direct: what spreads more widely becomes more "real" in the collective consciousness, regardless of the actual prevalence of the phenomenon.

For deeper understanding of influence mechanisms, see the analysis of social media algorithms and the study of addictive design.

[Figure: Key methodological challenges in algorithmic influence research: algorithmic opacity, rapid system evolution, ethical constraints, and data scale.]

🧠Mechanisms of Influence: How Algorithms Shape Collective Representations at the Neurocognitive Level

Algorithms influence collective memory through neurocognitive mechanisms that affect individual and collective consciousness. The distinction between correlation and causation is critical: multiple confounders may explain observed effects. More details in the Epistemology section.

🧬 Neuroplasticity and Digital Habit Formation

Repeated interaction with platforms creates stable neural patterns. When users regularly receive certain types of content in specific contexts, the brain begins to anticipate these patterns, creating automatic responses.

Many users automatically open social media apps during moments of boredom or waiting—these actions become deeply ingrained habits supported by brain neuroplasticity. This explains the compulsive nature of use: behavior becomes reinforced independent of conscious intention.

🔁 Dopamine Loops and Reinforcement Mechanisms

Social media algorithms exploit reinforcement mechanisms based on the brain's dopamine system. Unpredictable rewards—new likes, comments, interesting content—create a pattern of intermittent reinforcement, one of the most powerful ways to establish persistent behavior.

Users constantly check feeds anticipating the next "reward," even when it brings no meaningful satisfaction. The mechanism works precisely because the reward is unpredictable—the brain remains in a state of seeking.

🧷 Cognitive Load and Heuristic Thinking

The enormous volume of information in digital spaces creates high cognitive load. Under conditions of information overload, people rely on heuristics—mental shortcuts that allow quick decision-making without deep analysis.

Algorithms exploit this tendency by providing content that is easily processed and matches existing mental models. This leads to systematic cognitive biases, where simplified or distorted representations are accepted as accurate reflections of reality. The connection to logical fallacies in media literacy is direct: lack of critical analysis skills exacerbates the effect.

🧠 Social Proof and Conformity

  1. Engagement metrics (likes, shares, comments) activate neural systems of social validation
  2. Visibility of scale (millions of people shared) amplifies the effect of group belonging
  3. Evolutionary mechanisms adapted for small groups trigger at the scale of millions
  4. Result: uncritical acceptance of popular narratives regardless of factual accuracy

These mechanisms evolved for navigating social groups, but in digital environments they lead to mass conformity. Social media algorithms amplify this effect by making popularity visible and measurable.

🔁 Confirmation Bias and Algorithmic Amplification

Confirmation bias—the tendency to seek, interpret, and remember information that confirms existing beliefs—is a universal cognitive feature. Personalization algorithms amplify this bias by systematically providing content that matches a user's previous preferences and interactions.

| Process Level | Innate Mechanism | Algorithmic Amplification | Result |
| --- | --- | --- | --- |
| Information Seeking | Person seeks confirming data | Algorithm shows relevant content first | Illusion of consensus |
| Interpretation | Person reinterprets contradictory facts | Algorithm filters contradictory content | Polarization of views |
| Memory | Person better remembers confirming facts | Algorithm repeats confirming content | Persistent beliefs |

Synergistic effect: innate cognitive bias is amplified by the technological system, leading to more pronounced polarization and more persistent beliefs. Infinite scroll mechanics exacerbate this process, keeping users in a cycle of confirmation.

⚠️Conflicts and Uncertainties: Where Sources Diverge and What Remains Unclear

An honest analysis of the digital unconscious phenomenon requires acknowledging areas where research yields contradictory results or where data is insufficient for definitive conclusions. More details in the Cognitive Biases section.

Debates About the Scale of Filter Bubbles

The main conflict in the literature concerns the actual scale and impact of filter bubbles. One position: algorithmic personalization creates radically isolated information ecosystems where users virtually never encounter alternative viewpoints.

The opposing position: most users receive more diverse information than the popular bubble narrative suggests (S002). Traditional media also created forms of information segregation. This uncertainty reflects the complexity of measuring information diversity and differences in research methodologies.

The problem isn't that bubbles exist, but that we don't know their real boundaries and strength of influence on collective thinking.

Unclear Causal Relationships

Many studies demonstrate correlations between social media use and certain beliefs or behaviors, but establishing causal relationships remains a complex challenge.

Example: a correlation is observed between use of certain platforms and political polarization. But it's unclear whether algorithms cause polarization, or whether already polarized users choose certain platforms and content. This problem is complicated by multiple confounders: other factors that influence both social media use and belief formation.

  1. Correlation between platform and belief is documented
  2. Direction of causality is unknown
  3. Third variables (education, income, age) may explain both variables
  4. Longitudinal data is rare
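Point 3 above, the third-variable problem, can be demonstrated on synthetic data: let a single confounder (say, an age cohort) drive both platform use and polarization, with no direct causal link between the two, and a sizable correlation still appears. A sketch with made-up coefficients:

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical confounder Z drives both variables; there is NO direct X -> Y link.
z = [random.gauss(0, 1) for _ in range(5000)]
platform_use = [zi + random.gauss(0, 1) for zi in z]   # X = Z + independent noise
polarization = [zi + random.gauss(0, 1) for zi in z]   # Y = Z + independent noise

r = pearson(platform_use, polarization)
print(f"Correlation with no causal link: r = {r:.2f}")  # theory predicts r ~ 0.5
```

An observed correlation of this size between platform use and polarization is exactly what a confounded, causally inert relationship can look like, which is why longitudinal and experimental designs matter.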

Differences in Impact Across Demographic Groups

Research shows significant differences in how algorithms affect different demographic groups. Age, education, digital literacy, cultural context—all these factors modulate susceptibility to algorithmic influence.

However, systematic data on these differences remains limited. Many studies focus on specific populations (often young, educated users from Western countries), which limits the generalizability of results. This creates a blind spot: we don't know how algorithms affect older people, users with low digital literacy, or audiences outside the English-speaking internet.

Limitations in Understanding Long-Term Effects

Most research on algorithmic influence on belief formation is short-term or cross-sectional. Long-term longitudinal studies that could track how algorithmic exposure affects beliefs and behavior over years remain rare.

We study the effect of algorithms as a snapshot, not as a film. The cumulative effects of constant exposure to algorithmically curated content on worldview formation and collective memory remain unknown.

This uncertainty has practical significance: without understanding long-term effects, it's difficult to predict how the digital unconscious will shape collective memory in the next decade. The connection between short-term algorithmic exposures and long-term shifts in collective beliefs remains one of the major open questions.

🧩Cognitive Anatomy of Digital Myths: Which Psychological Mechanisms Are Exploited

The effectiveness of algorithms in creating and spreading digital myths is based on systematic exploitation of universal cognitive features and psychological vulnerabilities (S001). This isn't manipulation in the classical sense—it's resonance between platform architecture and the architecture of human perception.

A myth works because it addresses three basic needs: explaining uncertainty, finding an enemy, and confirming identity. Algorithms amplify precisely these signals. More details in the section Pharmaceutical Companies Hiding Data.

Pattern Recognition and Apophenia

The brain evolved to find patterns in noise. This saved lives on the savanna, but in the digital environment it becomes a trap.

Apophenia—seeing connections where none exist—is not a malfunction of perception but one of its default strategies. Algorithms don't create apophenia; they feed it.

When a platform shows you 10 coincidences in a row (even random ones), the brain switches to "this is not a coincidence" mode. Confirmation bias amplifies the effect: you notice coincidences that confirm the hypothesis and ignore those that refute it.

Social Proof and Cascades

People tend to believe the majority, not the facts. This isn't weakness—it's an adaptive strategy under conditions of uncertainty (S006).

Social Proof: The belief that if many people believe X, then X is probably true. On platforms this is amplified by likes, shares, and comments.

Information Cascade: Each subsequent person copies the previous person's decision without checking the facts. Result: the myth spreads exponentially, even if the original source is wrong.

The algorithm sees that content is gaining traction and shows it to even more people. This creates an illusion of consensus, which itself becomes fact.
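The cascade dynamic can be sketched as a classic sequential-choice model: each agent receives a noisy private signal about what is true, but copies the running majority once it outweighs that signal. All parameters are illustrative:

```python
import random

random.seed(1)

def run_cascade(n_agents=100, signal_accuracy=0.7, truth=True):
    """Toy information cascade: agents defer to the visible majority
    once its lead outweighs one private signal."""
    choices = []
    for _ in range(n_agents):
        # Private signal: correct with probability signal_accuracy
        private = truth if random.random() < signal_accuracy else not truth
        lead = choices.count(True) - choices.count(False)
        if lead >= 2:          # majority lead outweighs own signal: copy it
            choices.append(True)
        elif lead <= -2:
            choices.append(False)
        else:                  # otherwise follow the private signal
            choices.append(private)
    return choices

choices = run_cascade()
print("First 10 choices:", choices[:10])
```

Once the lead reaches two, every later agent copies it regardless of their own signal, so a handful of early (possibly wrong) choices can lock in the whole population; engagement-ranking feeds make that early lead visible to everyone at once.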

Emotional Valence and the Amygdala

Content that triggers fear, anger, or outrage spreads 5–10 times faster than neutral content. This isn't because people are malicious—it's because emotions signal relevance.

A myth that frightens or outrages seems more important. The algorithm knows this and prioritizes such content. Result: digital space becomes emotionally overloaded, and myths become more "alive" and convincing.

Identity and Tribalism

A myth that confirms your group identity is automatically perceived as truth. This is called motivated reasoning.

| Mechanism | How It Works | Result |
| --- | --- | --- |
| In-group Favoritism | Information from "our people" seems more reliable | Myth spreads within the community without verification |
| Out-group Hostility | An enemy unites the group more strongly than shared values | Myth about the enemy becomes the central narrative |
| Conformity | Deviation from group opinion threatens exclusion | Doubts are suppressed, myth is reinforced |

Algorithms create filter bubbles: you only see content that resonates with your identity. This amplifies tribalism and makes alternative narratives invisible.

Cognitive Load and Heuristics

Under conditions of information overload, the brain switches to fast heuristics instead of deep analysis. A myth is a ready-made heuristic: simple, memorable, requiring no effort.

Checking a fact is difficult. Believing a myth is easy. Algorithms show you myths because they work faster and keep you in the app longer than neutral information.

Narrative Coherence

A myth doesn't have to be true—it has to be coherent. If a story explains events, predicts the future, and gives the listener a role, it seems truthful.

A well-told lie is more convincing than a poorly told truth. Algorithms optimize precisely for narrative coherence, not for truthfulness.

Platforms reward content that creates a clear picture of the world: enemy, victim, hero. This is the archetypal structure of myth, and it works regardless of facts.

Verification Protocol: How to Recognize Exploitation

  1. Does the content trigger strong emotion (fear, anger, outrage) without providing verifiable facts?
  2. Does it confirm your group identity or hostility toward another group?
  3. Does it offer a simple explanation for a complex phenomenon?
  4. Is it spreading in a filter bubble (do you see opposing views)?
  5. Can you find the original source or is it only quotes and retellings?

If the answer is "yes" to 3+ points—the content is exploiting cognitive mechanisms. This doesn't mean it's false, but it does mean it requires additional verification.
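The five questions above fold into a trivial checklist helper that flags content once three or more answers come back "yes" (the question wording is condensed here):

```python
CHECKLIST = [
    "Strong emotion (fear/anger/outrage) without verifiable facts?",
    "Confirms group identity or hostility toward another group?",
    "Simple explanation for a complex phenomenon?",
    "Spreading inside a filter bubble (no opposing views visible)?",
    "Original source missing (only quotes and retellings)?",
]

def needs_verification(answers, threshold=3):
    """Flag content for extra checking when >= threshold answers are True."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("one answer per checklist question")
    return sum(answers) >= threshold

# Example: content hitting emotion, identity, and oversimplification
print(needs_verification([True, True, True, False, False]))  # True -> verify further
```

As the protocol notes, a flag does not mean the content is false; it means the content leans on cognitive mechanisms rather than evidence and deserves a closer look.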

Digital myth is not a conspiracy of platforms. It's the natural result of the meeting between evolutionary psychology and algorithmic optimization. Understanding the mechanisms is the first step toward immunity.

⚔️ Counter-Position Analysis

⚖️ Critical Counterpoint

The position on the omnipotence of algorithms in shaping collective narratives contains several vulnerabilities. Below are mechanisms that weaken this argumentation.

Overestimation of Algorithmic Determinism

The article may create an impression of complete algorithmic control over collective narratives, ignoring user agency and their capacity for critical thinking. Research shows that people are active interpreters of content, capable of resisting algorithmic recommendations, rather than passive recipients.

Insufficient Empirical Foundation

Most claims about the long-term influence of algorithms on collective memory are based on observational studies and theoretical models, rather than controlled experiments. Causal relationships may be significantly weaker than presented in the analysis.

Ignoring Positive Effects

The focus on manipulation and myths overshadows the fact that algorithms democratize access to information and help marginalized groups find communities. They can also amplify prosocial narratives—from civil rights movements to mutual aid initiatives.

Technological Determinism

The article may unintentionally support a narrative that technology determines social processes, whereas algorithms are embedded in broader economic, political, and cultural contexts. Criticism should be directed at business models and regulatory failures, not just at technology per se.

Risk of Moral Panic

Excessive emphasis on the dangers of digital myths may lead to technophobia and calls for censorship, which are potentially more harmful than the myths themselves. A balance is needed between criticism and recognition of the complexity of digital ecosystems.

Frequently Asked Questions

It's a metaphor describing the process of forming shared narratives and "truths" through algorithmic systems of social networks and recommendation platforms. Unlike Jung's collective unconscious (archetypes passed through generations), the digital "unconscious" is artificially created: algorithms select, amplify, and distribute certain ideas, creating an illusion of spontaneous consensus. Research on collective memory in the digital age shows that digital tools don't simply store information but actively construct what becomes the "shared memory" of communities (S003).
Through three main mechanisms: selective visibility (the algorithm shows content matching previous preferences), social reinforcement (popular content gets more exposure), and the echo chamber effect (users see predominantly confirming information). These mechanisms transform private opinions into "accepted truths." For example, TikTok's algorithm can turn a local urban legend into a global trend with millions of views in 48 hours, creating the sense that "everyone knows about this." Trace data allows tracking how collective memory forms inside and outside digital space (S003).
In speed of spread, scale of reach, and creation mechanism. Traditional myths formed over centuries through oral transmission and cultural practices, reflecting archetypal patterns of human experience. Digital myths are created in days or hours, spread to billions simultaneously, and are often constructed not spontaneously but through algorithmic curation. Additionally, traditional myths were relatively stable, while digital myths are volatile: today's "accepted fact" can be forgotten or debunked by a new trend tomorrow.
Partially yes, but not in the sense of malicious conspiracy. Algorithms are optimized for engagement metrics: watch time, likes, comments, shares. Content triggering strong emotions (anger, fear, excitement) gets priority because it holds attention. This creates systematic distortion: not the most accurate or important information becomes visible, but the most "engaging." Research shows that the use of machine learning and AI in digital systems is still in early stages, and many effects are insufficiently studied (S006). Manipulation occurs not through direct thought control but through choice architecture: what you see determines what you believe.
Yes, through cognitive hygiene and conscious content consumption. Key strategies: source diversification (read beyond your feed), fact-checking original sources (don't trust screenshots and retellings), awareness of your own triggers (what content evokes strong emotions and why), using control tools (disabling recommendations, screen time limits). It's important to understand that complete protection is impossible—algorithms adapt faster than individual defense strategies. But critical thinking and metacognitive awareness (understanding how you think) significantly reduce vulnerability.
Main ones: confirmation bias (we seek information confirming our beliefs), availability heuristic (we consider frequent what's easily recalled), bandwagon effect (we believe what many believe), illusion of truth (repeated information seems more credible). Algorithms exploit these biases: they show content you're already inclined to accept, repeat the same narratives through different sources, create the appearance of mass consensus through like and view counters.
Yes, but the evidence base is moderate. Research on collective memory in the digital age shows that digital tools and trace data open new possibilities for studying collective memory formation (S003). However, most studies are observational rather than experimental (for ethical reasons, it's difficult to conduct controlled experiments manipulating the memory of millions). There's data showing that algorithmic curation of news feeds influences perception of event importance, but long-term effects on collective memory are insufficiently studied. The use of AI and machine learning in digital systems is still in early stages (S006).
Due to a combination of cognitive, social, and technological factors. Cognitively: our brains evolved for quick decisions under limited information, so we rely on heuristics (simplified rules) and trust social consensus. Socially: we want to belong to a group, and accepting shared narratives is a way to signal belonging. Technologically: algorithms create an illusion of objectivity ("a computer shows this, so it's true") and scale ("a million views can't be wrong"). Additionally, digital platforms lower barriers to information spread: anyone can become a source of "truth," and distinguishing expert opinion from amateur becomes harder.
Classic examples: the 5G and COVID-19 myth (algorithms amplified conspiracy theories through YouTube's recommendation system), the "toxic positivity" myth (a concept that went viral through TikTok and became a simplified narrative), the "quiet quitting" myth (a term created on social media and picked up by mainstream media as a global trend, though it describes ordinary behavior). These myths share one thing: they emerged or intensified through algorithmic distribution, created a sense of mass phenomenon, and influenced real human behavior (for example, the labor market in the case of quiet quitting).
Use a five-step protocol: 1) Find the original source—who first said this and based on what data? 2) Check temporal dynamics—how quickly did the information spread? Viral spread in 24-48 hours is a red flag. 3) Assess emotional load—if information triggers strong anger, fear, or excitement, this may indicate algorithmic optimization. 4) Seek alternative sources—what do experts outside your information bubble say? 5) Check metrics—high view counts don't equal credibility. If information doesn't pass at least three of five checks, treat it skeptically.
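The five-step protocol above can be sketched as a simple scorecard. The check names and the pass/fail inputs below are invented for illustration; the reader supplies the judgments, the code only counts them against the "at least three of five" bar.

```python
# Hypothetical scorecard for the five-step verification protocol.
# Each check corresponds to one step of the protocol in the text.

CHECKS = [
    "original_source_found",       # 1) traced to a primary source with data
    "spread_slower_than_48h",      # 2) no red-flag viral spike in 24-48 hours
    "low_emotional_load",          # 3) doesn't rely on anger/fear/excitement
    "independent_experts_agree",   # 4) confirmed outside your information bubble
    "credibility_beyond_metrics",  # 5) trust not based on view counts alone
]

def passes_protocol(results: dict[str, bool], threshold: int = 3) -> bool:
    """Treat a claim skeptically unless it passes at least `threshold` checks."""
    passed = sum(results.get(name, False) for name in CHECKS)
    return passed >= threshold

# Example: a viral, emotionally loaded claim that nevertheless traces to a
# real source and has independent expert support.
claim = {
    "original_source_found": True,
    "spread_slower_than_48h": False,
    "low_emotional_load": False,
    "independent_experts_agree": True,
    "credibility_beyond_metrics": True,
}
# Three of five checks pass, so the claim just clears the minimal bar.
```

The threshold is deliberately conservative: failing any three checks is enough to default to skepticism.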
Yes, in limited cases. Some digital narratives function as social glue: they help people cope with uncertainty, create a sense of community, and mobilize collective action (e.g., environmental movements organized through social media). The problem isn't the existence of shared narratives themselves, but their opacity and manipulative nature. If a myth is created consciously, with understanding of influence mechanisms, and serves prosocial goals (e.g., vaccination campaigns), it can be a tool for positive change. Danger arises when myths are created unintentionally or used for commercial/political manipulation without informed audience consent.
Three scenarios are likely. Optimistic: development of AI tools for information verification and personalized filtering will allow people to better control their information flows. Neutral: status quo will persist—algorithms will become more complex, but protection methods will develop in parallel. Pessimistic: generative AI (like GPT-4 and beyond) will make creating convincing fake narratives so cheap and scalable that the distinction between "real" and "constructed" collective memory will disappear. Research shows that AI use in digital systems is still in early stages, and future challenges will require ongoing collaborative research and improvements (S006). The most likely outcome is a mixed scenario with increased polarization between those who possess cognitive defense tools and those who remain vulnerable.
Deymond Laplasa
Cognitive Security Researcher

Author of the Cognitive Immunology Hub project. Researches mechanisms of disinformation, pseudoscience, and cognitive biases. All materials are based on peer-reviewed sources.
// SOURCES
[01] Confronting the Challenges of Participatory Culture: Media Education for the 21st Century
[02] Human resource management in the age of generative artificial intelligence: Perspectives and research directions on ChatGPT
[03] Discriminating Data
[04] Mismeasured Variables in Econometric Analysis: Problems from the Right and Problems from the Left
[05] Delivering public value through open government data initiatives in a Smart City context
[06] The Rooting of the Mind in the Body: New Links Between Attachment Theory and Psychoanalytic Thought
[07] Bastard Culture! How User Participation Transforms Cultural Production
[08] Moneylab Reader: An Intervention in Digital Economy
