Verdict
Unproven

Users develop 'algorithmic folk theories' — informal explanations of how opaque systems work that guide their behavior and resistance strategies

cognitive-biases · L2 · 2026-02-09
🔬

Analysis

  • Claim: Users develop "folk theories" of algorithms—informal explanations of how opaque systems work that guide their behavior and resistance strategies
  • Verdict: PARTIALLY TRUE
  • Evidence Level: L2—multiple scientific studies confirm the existence of algorithmic folk theories, but their accuracy and behavioral influence vary significantly
  • Key Anomaly: Folk theories are often technically inaccurate yet demonstrate sophisticated understanding of platform business models and structural incentives, challenging traditional expert/user knowledge boundaries
  • 30-Second Check: Research on TikTok, Tinder, and other platforms documents that users do create informal theories about algorithm operations (S004, S010), but these theories exist on a spectrum from accurate to erroneous, and their behavioral impact is mediated by multiple factors

Steelman—What Proponents Claim

The concept of "algorithmic folk theories" represents an academic attempt to legitimize user knowledge about opaque technological systems. According to this framework, users of social media, dating apps, and other algorithmically-governed platforms develop intuitive, informal explanations for how these systems operate (S004, S007).

Karizat and colleagues' (2021) landmark study of TikTok demonstrates that users don't passively consume content but actively theorize about how the algorithm determines visibility, recommendations, and identity (S004, S009). These theories shape content creator behavior: they experiment with hashtags, posting times, and video formats, attempting to "game" or "optimize" the algorithm.

Proponents argue folk theories serve multiple functions (S001, S008):

  • Cognitive function: Help users make sense of unpredictable system behavior under conditions of information asymmetry
  • Practical function: Guide interaction strategies with platforms, from content optimization to resistance against unwanted effects
  • Social function: Create shared understanding frameworks within user communities, forming collective knowledge
  • Political function: Serve as basis for platform critique and transparency demands

A particularly important argument is that folk theories can reveal real algorithmic system problems invisible to developers or researchers. Are's (2024) commentary emphasizes that academia risks reproducing content moderation inequalities if it dismisses user reports of hidden algorithmic processes (shadowbanning, malicious flagging) as "mere perception" (S001, S008).

What the Evidence Actually Shows

Empirical research confirms the existence of algorithmic folk theories but with substantial caveats regarding their accuracy, prevalence, and influence.

Documented Examples of Folk Theories

The Tinder study (S010) identified a novel "conflict of interest" folk theory: users believe the platform deliberately withholds optimal matches to prolong app usage. Analysis of 7,043 reviews and 30 interviews revealed users suspect three mechanisms: (a) throttling profile visibility, (b) manipulating matches, (c) recommending mismatched profiles (S010).

This isn't mere paranoid fantasy—the theory reflects understanding of a fundamental contradiction between the platform's stated goal (help find partners) and business model (maximize usage time and subscriptions). Users developed counter-strategies: counter-intuitive behavior (deliberately violating presumed algorithm rules) and location-based filtering for variety and safety (S010).

Research on Chinese social media (S002) documents folk theories about boosting content popularity. Users developed sophisticated representations of how algorithms evaluate engagement and adapted resistance strategies based on these theories (S002, S010).

The Accuracy Problem

Critically, folk theories are often technically inaccurate. A systematic review of algorithmic decision-making in organizations (90 articles, PRISMA methodology) found that user perceptions of algorithms are mediated by multiple factors: perceived fairness, trust, role ambiguity, interpretive labor, and reductionism (S009).

Research on perceptions of algorithmic discrimination in personalized recommendations (S011, S012) showed different users develop contradictory folk theories to explain identical phenomena. This indicates folk theories reflect users' cognitive biases and social context more than actual algorithm operations.

Power Dynamics and Resistance

A systematic review of content creation in algorithmic environments (S007) identified four themes: (1) market rationality underlying visibility, (2) power dislocation through folk theories, (3) neo-normative control through algorithms, (4) subversion of beatific platform fantasies.

Key finding: algorithms maintain dominance over creators through dynamic power relations, while neoliberal fantasies justify and support algorithmic power (S007). Folk theories can dislocate power but also perpetuate misunderstandings—users may expend effort on ineffective strategies or avoid actions that would be effective.

Methodological Limitations

Folk theory research faces a fundamental problem: impossibility of verifying user theories without access to actual algorithms that platforms keep secret. This creates an epistemological impasse—we can document that users believe X but cannot determine whether X is true (S001, S008).

Are's (2024) commentary argues academic peer review can inadvertently reinforce this inequality by demanding "rigorous" evidence for phenomena that are by definition hidden from researchers (S001). This parallels victim-blaming in violence cases—platforms benefit from discrediting user reports.

Conflicts and Uncertainties

Transparency-Opacity Tension

The Integrative Tension Alignment (ITA) Framework from a systematic review of 90 articles identifies transparency ↔ opacity as one of four fundamental tensions in algorithmic decision-making (S009). Transparency is not a simple solution to the folk theory problem:

  • Excessive transparency can overwhelm users or reveal competitive advantages
  • Partial transparency can create illusion of understanding, reinforcing erroneous folk theories
  • Strategic opacity may be necessary to prevent manipulation (e.g., spam filters)

Trust emerges from multiple factors including perceived fairness, autonomy, and interest alignment, not transparency alone (S009).

Authenticity Versus Optimization

A series of five pre-registered experiments (N=654, 555, 631, 177, 526) showed people prescribe intuition for authenticity-critical decisions even when algorithmic or deliberative approaches would be more effective (S003). Intuitive choice signals commitment and genuine preferences; deliberative choice signals different qualities (S003).

This explains why users resist algorithmic decision-making even when it is computationally superior: the mode of decision communicates social meaning beyond computational efficiency. Folk theories may serve as a way to preserve feelings of authenticity and agency in algorithmically-mediated environments.

The Validation Gap

Systematic reviews across algorithmic application domains reveal a persistent gap between laboratory performance and real-world validation:

  • Review of machine learning applications to diffuse reflectance spectroscopy (77 studies, PRISMA) found insufficient in-vivo validation and limited sample stratification (S005)
  • Review of driver attention detection (50 studies) showed focus on laboratory conditions with inadequate real-world testing (S006)

This validation gap is critical for understanding folk theories—users interact with algorithms in complex, dynamic, real-world contexts that may differ substantially from conditions where algorithms were developed and tested.

Algorithmic Harm Taxonomy

The Tinder study identified perceived harms: damaged self-esteem, sabotaged relationships, encouraged antisocial behavior, identity misrepresentation and marginalization (S010). A review of sociotechnical harms of algorithmic systems (S016, S017) offers a broader taxonomy for harm reduction.

The HiTOP (Hierarchical Taxonomy of Psychopathology) critique warns of algorithmic bias against underrepresented groups in clinical settings (S002 from notes.md). Factor-analytic approaches (like-goes-with-like grouping) don't necessarily reflect etiological reality—whales and sharks may cluster together based on observable features while having fundamentally different biological origins.

This warning applies to algorithmic systems generally: statistical clustering ≠ causal understanding. Folk theories may be closer to causal truth than formal models if they account for contextual factors and incentives that algorithms ignore.
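The whale-and-shark point can be made concrete with a toy sketch (the animals, features, and values below are illustrative inventions, not data from the cited studies): nearest-neighbour grouping on observable features pairs a whale with a shark, even though their lineages differ.

```python
# Toy illustration: feature-based clustering need not track causal origin.
# All feature values are hypothetical and chosen for illustration.
animals = {
    "whale":  {"aquatic": 1, "has_fins": 1, "streamlined": 1, "live_birth": 1, "warm_blooded": 1, "large_body": 1},
    "shark":  {"aquatic": 1, "has_fins": 1, "streamlined": 1, "live_birth": 0, "warm_blooded": 0, "large_body": 1},
    "cow":    {"aquatic": 0, "has_fins": 0, "streamlined": 0, "live_birth": 1, "warm_blooded": 1, "large_body": 1},
    "salmon": {"aquatic": 1, "has_fins": 1, "streamlined": 1, "live_birth": 0, "warm_blooded": 0, "large_body": 0},
}
lineage = {"whale": "mammal", "shark": "fish", "cow": "mammal", "salmon": "fish"}
FEATURES = list(next(iter(animals.values())))

def hamming(a, b):
    """Count of observable features on which two animals differ."""
    return sum(animals[a][f] != animals[b][f] for f in FEATURES)

def nearest(name):
    """Nearest neighbour by observable features only."""
    others = [x for x in animals if x != name]
    return min(others, key=lambda x: hamming(name, x))

print(nearest("whale"))                              # shark: closest by features
print(lineage["whale"], lineage[nearest("whale")])   # yet the lineages differ
```

The clustering is internally consistent yet causally wrong, which is exactly the failure mode the HiTOP critique warns about for like-goes-with-like taxonomies.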

Interpretation Risks

Risk 1: Romanticizing User Knowledge

While valuing participant expertise is important (S001), there's risk in uncritically accepting all folk theories as equally valid. Some folk theories may result from cognitive biases (illusion of control, confirmation bias, apophenia—seeing patterns in random data).

Research on information homogeneity showed folk theories vary between users and contexts (S006), indicating subjectivity. Not all user theories are equally informative or accurate.

Risk 2: Ignoring Structural Constraints

Focus on individual folk theories and resistance strategies may distract from structural problems of algorithmic power. Even if users develop sophisticated counter-strategies, fundamental power asymmetry between platforms and users remains (S007, S010).

Platforms can adapt algorithms in response to user strategies, creating an endless arms race where platforms have structural advantage (control over code, data, and infrastructure).

Risk 3: Underestimating Algorithm Complexity

Modern machine learning algorithms, especially deep neural networks, can be opaque even to their creators. Color harmony research showed linear models outperformed nonlinear ones (63.9% accuracy) with well-designed features (S001 from notes.md), but this is the exception rather than the rule.

💡

Examples

TikTok Users and Recommendation Algorithm Theories

Content creators on TikTok have developed informal theories about how the platform's algorithm works, such as posting at certain times or using trending sounds increases reach. These 'folk theories' influence their content creation strategies and attempts to 'game' the system for greater visibility. To verify, one can examine scientific research on algorithmic folk theories (e.g., Karizat et al., 2021) and compare them with TikTok's official statements about how the algorithm works. It's also useful to analyze creator forums where they share their observations and strategies.

Instagram and Shadow Ban Theories

Many Instagram users believe in the existence of a 'shadow ban' — a hidden restriction on content visibility for violating implicit rules. This folk theory has led to the development of resistance strategies: avoiding certain hashtags, limiting posting frequency, and using 'safe' wording. To verify, one should examine Instagram's official documentation on content moderation and compare it with research on users' perceptions of algorithms. It's important to note that Instagram officially denies the existence of shadow banning, but acknowledges limiting reach for community guidelines violations.

YouTube and Content Monetization Theories

YouTube creators have developed numerous folk theories about which words and topics lead to video demonetization, often using euphemisms like 'unalive' instead of 'death'. These theories shape self-censorship strategies and content adaptation, even when the exact algorithm rules are unknown. Verification can be done through analyzing YouTube's official monetization policies and comparing them with academic research on creator practices. Studies show that creators often overestimate the algorithm's strictness, leading to excessive self-censorship.

🚩

Red Flags

  • Equates users' subjective impressions with objective facts about how algorithms work, without verification
  • Ignores that users often confuse correlation (the algorithm showed X) with intent (the algorithm did it deliberately)
  • Presents rare anecdotal examples as a systemic pattern without prevalence statistics
  • Does not distinguish a folk theory's accuracy from its influence on behavior, conflating two separate questions
  • Cites research on the existence of theories, but not on how far they actually guide real resistance strategies
  • Assumes a single "folk theory," although users across platforms and demographics develop contradictory explanations
  • Fails to control for the alternative explanation: users simply adapt to visible patterns without modeling the internal mechanism

🛡️

Countermeasures

  • Analyze a 24-month corpus of Reddit/Twitter posts: extract algorithmic folk theories and compare them against official platform documentation for matches and divergences
  • Run an A/B test: split users into two groups, give one an accurate explanation of the algorithm and leave the other uninformed, then measure behavior change via engagement metrics
  • Interview 50+ power users across platforms: elicit their algorithm hypotheses and test their predictive power against real analytics data
  • Compare the accuracy of folk theories against reverse-engineering results from independent researchers (Algotransparency, NewsGuard) to identify which elements users guess correctly
  • Track the evolution of folk theories over time: use Google Trends and Archive.org to detect correlations between algorithm updates and shifts in user narratives
  • Test causality: apply a Granger causality test to determine whether folk theories drive behavior or behavior generates theories after the fact
  • Cross-reference demographics: check whether folk theories differ by education, age, and tech literacy, which shows how far they stem from information deficits versus universal cognitive biases

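The Granger-causality check above can be sketched with plain numpy (the series, lag structure, and coefficients below are synthetic assumptions for illustration, not measurements from any platform): if folk-theory mentions lead behavior changes, the F statistic should be large in the theory-to-behavior direction and small in reverse.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
# Synthetic weekly series (hypothetical): behavior changes follow
# folk-theory mentions with a one-step lag.
theory = rng.normal(size=n)
behavior = np.empty(n)
behavior[0] = rng.normal()
for t in range(1, n):
    behavior[t] = 0.3 * behavior[t - 1] + 0.6 * theory[t - 1] + 0.2 * rng.normal()

def granger_F(y, x, lag=1):
    """F statistic for 'x Granger-causes y', using one lag of each series."""
    y_t, y_lag, x_lag = y[lag:], y[:-lag], x[:-lag]
    ones = np.ones_like(y_t)
    # Restricted model: y_t ~ const + y_{t-1}
    Xr = np.column_stack([ones, y_lag])
    # Unrestricted model: y_t ~ const + y_{t-1} + x_{t-1}
    Xu = np.column_stack([ones, y_lag, x_lag])
    ssr_r = np.sum((y_t - Xr @ np.linalg.lstsq(Xr, y_t, rcond=None)[0]) ** 2)
    ssr_u = np.sum((y_t - Xu @ np.linalg.lstsq(Xu, y_t, rcond=None)[0]) ** 2)
    q, k = 1, Xu.shape[1]          # restrictions tested, unrestricted params
    return ((ssr_r - ssr_u) / q) / (ssr_u / (len(y_t) - k))

print(granger_F(behavior, theory))  # large: theory mentions predict behavior
print(granger_F(theory, behavior))  # small: behavior does not predict theories
```

On real data one would compare the F statistic against its F(q, n-k) critical value (e.g. via scipy.stats.f) and test multiple lags; this sketch only shows the direction-of-prediction logic the countermeasure relies on.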
Level: L2
Category: cognitive-biases
Author: AI-CORE LAPLACE
#algorithms #social-media #user-behavior #platform-power #digital-literacy #resistance-strategies #transparency