© 2026 Deymond Laplasa. All rights reserved.

Cognitive immunology. Critical thinking. Defense against disinformation.

📁 Cognitive Biases
⚠️Ambiguous / Hypothesis

The Dunning-Kruger Effect: Why the Popular Interpretation "The Stupid Are Overconfident" Is Itself a Cognitive Bias

The Dunning-Kruger effect became a meme about incompetent people overestimating themselves while experts remain humble. But the original 1999 study showed something different: everyone overestimates themselves at low competence levels, the unskilled just do it more. The popular interpretation ignores statistical artifacts, regression to the mean, and methodological limitations. We examine how a scientific phenomenon turned into a cognitive weapon for intellectual arrogance—and what the data actually says about metacognition and self-assessment of competence.

📅 Published: February 18, 2026
⏱️ Reading time: 12 min

Neural Analysis
  • Topic: The Dunning-Kruger Effect — the gap between popular interpretation and the original 1999 study data
  • Epistemic Status: High confidence that the popular version is distorted; moderate confidence in alternative explanations (statistical artifacts, regression to the mean)
  • Evidence Level: Original study — small sample of students, specific tasks; criticism based on methodological analysis and replications with contradictory results
  • Verdict: The Dunning-Kruger Effect exists as a metacognitive calibration phenomenon, but not in the form of "the ignorant are confident, the knowledgeable doubt." All groups overestimate themselves when lacking skills, the difference is in degree. The popular interpretation is an oversimplification that ignores statistical effects and context.
  • Key Anomaly: Substitution of "everyone errs in self-assessment, but differently" with "only the incompetent don't know about their incompetence" — a logical leap not supported by the data
  • Check in 30 sec: Find the graph from the original Kruger & Dunning (1999) article — you'll see that ALL quartiles overestimate themselves at low skill levels, not just the bottom one
The Dunning-Kruger effect has become one of the most cited psychological phenomena in internet discussions — and one of the most distorted. The popular interpretation states: incompetent people are overconfident and overestimate themselves, while true experts are modest and full of doubt. This formula has become a cognitive weapon for intellectual arrogance, allowing anyone to accuse an opponent of "classic Dunning-Kruger." But the original 1999 study showed something different — and the popular interpretation itself demonstrates precisely the cognitive biases it claims to expose.

📌What Dunning and Kruger actually discovered — and how it became a meme about stupidity

The original study by Justin Kruger and David Dunning, published in the Journal of Personality and Social Psychology in 1999, examined metacognitive abilities — that is, people's ability to assess their own competence. Participants took tests on logical reasoning, grammar, and sense of humor, then evaluated their own performance. More details in the Scientific Method section.

Key observation: people with low scores systematically overestimated their performance, while people with high scores slightly underestimated themselves.

🔎 Original data: everyone overestimates themselves, but differently

A critically important nuance lost in popular retellings: in the Dunning-Kruger study, ALL participant groups overestimated their performance compared to objective results.

Participant group           | Actual result    | Self-assessment  | Magnitude of overestimation
Bottom quartile (worst 25%) | ~25th percentile | ~60th percentile | +35 points
Top quartile (best 25%)     | ~87th percentile | ~75th percentile | −12 points

The difference lay in the magnitude of overestimation, not in its presence or absence.

⚠️ How a scientific phenomenon became a weapon in arguments

Popular culture transformed this data into a binary model: "stupid people are overconfident, smart people are modest." This simplified version became a meme, allowing people to discredit opponents without analyzing their arguments.

The phrase "that's classic Dunning-Kruger" has become a rhetorical device that paradoxically demonstrates precisely the metacognitive blindness it claims to expose: the speaker is so confident in their superiority that they don't verify what the study actually showed.

🧩 The boundary between science and interpretation

Dunning and Kruger themselves never claimed that incompetent people are uniquely prone to overconfidence. Their hypothesis was more nuanced: lack of competence in a particular domain correlates with lack of metacognitive skills to assess that competence.

"Double burden"
A person not only performs a task poorly but also cannot accurately assess the quality of their performance. However, this formulation does not imply that competent people possess perfect self-assessment or that incompetent people are always maximally overconfident.

The popular interpretation commits a logical error: it transforms the correlation between competence and accuracy of self-assessment into a causal relationship, where low competence supposedly causes high overconfidence. In reality, both phenomena are linked to a third variable — metacognitive calibration, which develops independently.

[Figure: Perceived vs. actual competence across performance quartiles. The original Dunning-Kruger data show that all groups overestimate themselves, but the magnitude of overestimation decreases as competence increases; this is not a story of "stupid overconfident" and "smart modest" people.]

🧱Five arguments that support the popular interpretation — and why they seem convincing

The popular version of the Dunning-Kruger effect persists not because it's correct, but because it relies on real observations and psychological mechanisms. Let's examine which ones. More details in the Media Literacy section.

🎯 First argument: everyday experience confirms the pattern

Everyone can recall a colleague or acquaintance who demonstrated unwarranted confidence in their abilities. This creates a sense of validity: "I've seen it with my own eyes."

The problem: anecdotal observations are subject to confirmation bias and the availability heuristic. We remember vivid cases of mismatch between competence and confidence but fail to notice the thousands of cases where the correlation is absent or reversed.

  1. Vivid case: incompetent person is confident → remembered
  2. Ordinary case: incompetent person has doubts → unnoticed
  3. Result: distorted sample in memory

🎯 Second argument: evolutionary logic supports the hypothesis

Assessing one's own competence requires the same cognitive resources as the competence itself. If a person lacks a skill, they cannot evaluate the quality of performing that skill. The logic seems self-evident.

However, evolutionary plausibility does not equal empirical proof. Many intuitively appealing hypotheses don't withstand rigorous testing.

🎯 Third argument: replications confirm the basic pattern

Multiple studies have reproduced the basic pattern: people with low performance overestimate themselves more than people with high performance (S001, S005). Replications have been conducted across different domains — from medical diagnosis to driving.

The critical question is not whether the pattern replicates, but what causes it — a real psychological mechanism or a statistical artifact.

🎯 Fourth argument: the effect aligns with other cognitive biases

The Dunning-Kruger effect resonates with the overconfidence effect, illusory superiority, and self-serving bias. This conceptual coherence creates a sense that the effect is part of a valid theoretical framework.

Bias                 | Essence                                                        | Why it seems related
Overconfidence       | People overestimate the accuracy of their knowledge            | Low-competence people overestimate themselves
Illusory superiority | People consider themselves above average                       | Incompetent people consider themselves competent
Self-serving bias    | We attribute successes to ourselves, failures to circumstances | Low-competence people don't see their mistakes

However, coherence with other concepts doesn't guarantee that the effect itself is interpreted correctly.

🎯 Fifth argument: source authority and academic publication

The study was published in a prestigious peer-reviewed journal, the authors are respected psychologists from Cornell University, and the work has been cited thousands of times (S001). This academic authority creates a presumption of reliability.

For most people, source authority serves as a quality heuristic. However, even prestigious publications can contain methodological limitations that become apparent only upon careful analysis.

  • Why authority works as a heuristic: checking methodology requires time and expertise; authority is a quick signal of reliability.
  • Why this is dangerous: limitations of the original study may be misinterpreted in popularization.
  • What happens: each citation reinforces the impression of validity, even if citing authors haven't verified the original data.

🔬Statistical Artifacts and Regression to the Mean — What the Data Actually Shows

The Dunning-Kruger effect may be a statistical artifact, not a psychological one. Three mechanisms create the illusion of a pattern without any specific cognitive bias. More details in the Mental Errors section.

Regression to the Mean

Extreme values in one measurement tend toward the average in another — this is pure mathematics, not psychology. When a low-competence person happens to get a low score (partly due to bad luck), their self-assessment, which doesn't contain that same noise, appears inflated. A high-competence person, conversely, may have achieved a high score partly through luck — and their self-assessment appears deflated.

Regression to the mean creates a pattern identical to the Dunning-Kruger effect, even if the true correlation between competence and metacognitive accuracy is zero.
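This claim is easy to check with a short simulation. The model below is a deliberately simplified assumption, not the original study's data: one latent skill per person, observed twice with independent noise, once by the test and once by self-assessment. No metacognitive deficit is built in, yet grouping people by test-score quartile reproduces the familiar pattern: the bottom quartile appears to overestimate itself and the top quartile to underestimate itself.

```python
import random
import statistics

random.seed(42)
N = 10_000

# One latent skill, measured twice with independent noise.
skill = [random.gauss(0, 1) for _ in range(N)]
test = [s + random.gauss(0, 1) for s in skill]      # noisy test score
self_est = [s + random.gauss(0, 1) for s in skill]  # noisy, independent self-view

def percentiles(xs):
    """Rank each value on a 0-100 percentile scale."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    pct = [0.0] * len(xs)
    for rank, i in enumerate(order):
        pct[i] = 100.0 * rank / (len(xs) - 1)
    return pct

test_pct = percentiles(test)
self_pct = percentiles(self_est)

# Split people into quartiles by test score and compare group means.
by_score = sorted(range(N), key=lambda i: test[i])
gaps = []
for q in range(4):
    group = by_score[q * N // 4:(q + 1) * N // 4]
    actual = statistics.mean(test_pct[i] for i in group)
    perceived = statistics.mean(self_pct[i] for i in group)
    gaps.append(perceived - actual)
    print(f"Q{q + 1}: actual ~{actual:4.0f}, perceived ~{perceived:4.0f}, "
          f"gap {perceived - actual:+5.1f}")
```

The bottom quartile shows a large positive gap and the top quartile a negative one, purely because both measurements regress toward the mean independently. The exact gap sizes depend on the assumed noise level, not on any psychology.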

Measurement Noise and Scale Asymmetry

Any measurement contains random error. When we compare self-assessment with objective performance, both variables contain noise independent of each other.

Add to this the constraints of the scale: someone in the 5th percentile cannot underestimate themselves by more than 5 points, but can overestimate by 95. Someone in the 95th percentile — the opposite. This mathematical asymmetry systematically biases lower groups toward overestimation, upper groups toward underestimation.

Source of bias                | Mechanism                                                  | Result
Regression to the mean        | Extreme values contain more noise                          | Low scores appear overestimated, high scores appear underestimated
Scale asymmetry               | Lower end of the scale has less "room" for underestimation | Lower groups systematically overestimate themselves mathematically
Independent measurement noise | Self-assessment and test contain different errors          | Mismatch appears as systematic bias
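The scale-asymmetry argument is pure arithmetic and can be stated in a few lines. The helper functions below are illustrative, not from the original study: the first gives the hard bounds on error at a given percentile, the second shows what apparent bias would look like if self-assessment carried no information at all and simply hovered around the scale midpoint.

```python
# A person whose true standing is percentile p (0-100 scale) can
# underestimate by at most p points and overestimate by at most 100 - p.
def error_bounds(p):
    return {"max_under": p, "max_over": 100 - p}

# If self-assessment were pure noise centered on the midpoint, the
# apparent bias would be mean_guess - p: large and positive near the
# floor, negative near the ceiling, with zero metacognition involved.
def expected_gap_if_random(p, mean_guess=50):
    return mean_guess - p

print(error_bounds(5), expected_gap_if_random(5))    # floor: can barely underestimate
print(error_bounds(95), expected_gap_if_random(95))  # ceiling: can barely overestimate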

What Data Reanalysis Shows

When researchers applied corrections for regression to the mean and measurement noise to the original Dunning-Kruger data, the effect size substantially decreased or disappeared entirely (S001). Some models show that the observed pattern is fully explained by a combination of three artifacts: regression to the mean, ceiling/floor effects, and a general tendency toward moderate self-overestimation among all participants regardless of competence.

This doesn't mean people don't make errors in self-assessment — they do. But the error isn't specific to the low-competent: it's universal and explained by statistics, not psychology.

If an effect disappears with statistical correction, it means we were observing a methodological artifact, not a real psychological phenomenon.

The connection between this and base rate neglect is profound: both errors arise when we fail to account for the statistical constraints of data. People often interpret correlations as causality, not noticing that the very structure of measurements creates the illusion of a pattern.

[Figure: Real data compared with a simulation driven only by statistical artifacts. When researchers model data accounting only for regression to the mean and measurement noise, they get a pattern indistinguishable from the "Dunning-Kruger effect", which calls the psychological interpretation into question.]

🧠Metacognitive Calibration Against Popular Myth — What Later Research Shows

Over two and a half decades since the original study's publication, a significant body of data has accumulated on metacognitive calibration — people's ability to accurately assess their competence. More details in the Cognitive Biases section.

🧬 Meta-Analyses Show a More Complex Picture

Meta-analyses of metacognitive accuracy research show that a correlation between competence and self-assessment accuracy exists, but it's weak to moderate (typically r = 0.2-0.4). This means competence explains only 4-16% of the variation in self-assessment accuracy.

Most variation is determined by other factors: personality traits, motivation, task context, cultural norms. Moreover, the direction of the relationship doesn't always match the popular interpretation: in some domains, more competent people demonstrate greater self-overestimation, especially when the task relates to their professional identity (S006).

Competence explains only 4–16% of variation in self-assessment. The rest — personality, motivation, context, culture.
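The 4-16% figure follows directly from squaring the correlation coefficient, since r² is the share of variance one variable explains in the other:

```python
# Variance explained by a correlation coefficient r is r squared.
for r in (0.2, 0.4):
    print(f"r = {r}: r^2 = {r**2:.2f} -> {r**2:.0%} of variance explained")
# r = 0.2 explains 4%; r = 0.4 explains 16%
```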

🧬 The Role of Feedback and Training

The original Dunning-Kruger study included an important component: when participants with low scores received brief training, their metacognitive accuracy improved. However, later research showed that feedback improves calibration across all groups, not just the low-competent (S007).

Furthermore, the training effect is often explained simply by providing information about the distribution of results, rather than developing metacognitive skills. This means the improvement mechanism isn't correcting a comprehension deficit, but changing available information.

🧬 Cross-Cultural Differences Challenge Universality

Research in non-Western cultures shows substantial differences in self-assessment patterns. In cultures with high collectivism and modesty as a social norm (for example, in East Asia), the pattern is often reversed: more competent people demonstrate greater self-underestimation, while less competent people show more accurate calibration.

This indicates that observed patterns are largely determined by cultural norms of self-presentation, rather than a universal cognitive mechanism. If the effect were biological, it should manifest identically everywhere.

  1. Western cultures: low-competent overestimate themselves
  2. Eastern cultures: high-competent underestimate themselves
  3. Conclusion: cultural norms, not a universal mechanism

🔁 Domain Specificity Versus General Mechanism

If the Dunning-Kruger effect reflects a fundamental cognitive mechanism, it should manifest consistently across different domains. However, research shows high domain specificity: a person may be well-calibrated in assessing their mathematical abilities but poorly calibrated in assessing social skills (S001).

Calibration depends on task type: people assess themselves better in tasks with clear success criteria and worse in tasks with subjective or multiple criteria. This specificity is poorly consistent with the idea of a general metacognitive deficit in incompetent people.

Task type          | Success criteria     | Calibration
Mathematics, logic | Clear, objective     | Good
Social skills      | Subjective, multiple | Poor
Creativity         | Ambiguous            | Unpredictable

⚙️Causality, Correlation, and Third Variables — Why the Link Between Competence and Self-Assessment Isn't So Simple

Even if a correlation between competence and metacognitive accuracy exists, the question of causality remains open. Correlation is not causation, and here lie at least three alternative explanations. More details in the section Psychology of Belief.

🔁 Reverse Causality: Confidence May Precede Competence

The popular interpretation assumes: incompetence → self-overestimation. But the arrow may point the other way.

People who are initially confident are more likely to take on challenging tasks, get more practice, and become more competent. Confidence here is not a consequence of incompetence, but a predictor of future competence. Longitudinal studies show: baseline self-confidence predicts skill growth better than initial skill level predicts changes in confidence.

If confidence drives action, and action creates competence, then the correlation between them is the result of a causal chain, not proof that incompetent people are overconfident.

🔁 Third Variables: Personality, Motivation, Context

Multiple factors simultaneously influence both competence and self-assessment, creating spurious correlation.

Variable               | Impact on competence                      | Impact on self-assessment                      | Result
Narcissism             | Weak (may reduce learning ability)        | Strong (inflates regardless of facts)          | Correlation without causation
Achievement motivation | Strong (more practice → higher skills)    | Strong (higher self-assessment standards)      | Both grow together
Anxiety                | Weak (may reduce performance)             | Strong (underestimation even with high skills) | Inverse correlation
Social context         | Moderate (affects practice opportunities) | Strong (determines self-presentation norms)    | Contextual correlation

All these factors create an apparent link between competence and self-assessment without a direct causal arrow between them.

🔁 The Problem of Operationalizing Competence

In the original study, competence was operationalized as performance on one test at one point in time. But is this a valid measure of true competence?

  • Tests measure specific knowledge: a person may perform poorly on a particular test yet possess high competence in real-world conditions within the domain; the test doesn't reflect the broader range of skills.
  • Situational factors distort results: anxiety, fatigue, and misunderstood instructions all reduce performance independently of actual skill; we're measuring performance at a specific moment, not competence as such.
  • Self-assessment may be calibrated to a different standard: a person may honestly assess themselves relative to their own progress or their community rather than relative to the test; this misalignment of standards creates the appearance of overestimation.

When we say "incompetent people overestimate themselves," we assume the test accurately measures competence. But this itself requires proof, which is often absent. Ignoring base rates is particularly dangerous here: we forget that even a valid test has limitations in generalizability.

The result: the correlation we see in the data may be an artifact of how we measure competence, rather than a reflection of a real psychological pattern.

🕳️Cognitive Anatomy of the Myth — Which Biases Make the Popular Version So Appealing

The popular interpretation of the Dunning-Kruger effect is itself an example of several cognitive biases that make it resistant to correction.

⚠️ Confirmation Bias and Selective Attention

People who believe in the popular version notice and remember cases that confirm it, and ignore contradictory ones. An incompetent person with confidence — confirmation of the effect. An incompetent person with uncertainty or a competent person with overconfidence — explained by special circumstances.

This selectivity creates an illusion of pattern universality. The mechanism works like confirmation bias: the brain filters reality to fit a predetermined conclusion.

⚠️ Fundamental Attribution Error and Ignoring Situational Factors

The popular interpretation attributes self-overestimation to internal characteristics (incompetence), ignoring situational factors. A person may appear overconfident because the social context demands a display of confidence (job interview), or they lack access to information about standards, or they use different evaluation criteria.

Focusing on dispositional explanations while ignoring situational ones is the classic fundamental attribution error. It's the same bias we apply when judging other people.

⚠️ Illusion of Asymmetric Insight

People who use the Dunning-Kruger effect as an argument apply it to others, but not to themselves. This is the illusion of asymmetric insight — the belief that we understand others better than they understand themselves.

When someone says "you have classic Dunning-Kruger," they implicitly claim to possess the metacognitive clarity to diagnose someone else's blindness. This asymmetry is rarely subjected to reflection.

🧩 Halo Effect and Oversimplification of Complexity

The popular version is attractive in its simplicity: one variable (competence) predicts another (metacognitive accuracy). This simplicity creates a halo effect — the feeling that the explanation is elegant and therefore true.

  1. Reality of metacognitive calibration: multiple variables
  2. Nonlinear interactions between factors
  3. Domain specificity (different areas require different assessment skills)
  4. Cultural differences in self-evaluation standards
  5. Statistical artifacts that create the appearance of an effect

Complexity is less cognitively appealing, and the simplified version displaces the nuanced one. This is an example of how the availability heuristic works at the level of ideas: a simple explanation is more accessible to memory and therefore seems more true.

🛡️Verification Protocol: Seven Questions That Expose Misapplication of the Dunning-Kruger Effect

When someone references the Dunning-Kruger effect in a discussion, the following questions help assess whether this application is justified or merely a rhetorical device.

  1. Is competence defined objectively and independently? Valid application of the effect requires independent objective measurement of competence. If competence is defined subjectively or circularly (e.g., "he's incompetent because I disagree with him"), the reference to the Dunning-Kruger effect is invalid. What specific test or measurement was used? What are its psychometric properties?
  2. Is self-assessment measured systematically? The Dunning-Kruger effect concerns systematic bias in self-assessment of competence, not merely high confidence. A subjective impression of someone's overconfidence is not data. Was a standardized self-assessment instrument used?
  3. Is there a correlation between competence and self-assessment in this domain? If the correlation is weak or absent, the effect doesn't apply. Check: what is the effect size? Is it statistically significant? Or could this be an artifact of regression to the mean (S001)?
  4. Were third variables controlled? Motivation, stress, cultural norms, education—all influence self-assessment. If these factors aren't accounted for, you're seeing correlation, not causation. Which variables were controlled in the study?
  5. Are the results replicable in this domain? The Dunning-Kruger effect doesn't replicate everywhere (S006). In some fields (especially highly specialized ones), the relationship between competence and self-assessment is quite different. Are there independent replications in your specific context?
  6. Is the effect being applied to a group or an individual? The effect describes group trends, not individual cases. The claim "this person is incompetent because they're overconfident" is a logical fallacy. The effect speaks to distributions, not causality for a specific person.
  7. Is the effect being used as an explanation or as a label? If the reference to the effect closes discussion instead of opening it, it's a rhetorical device. Valid application is a hypothesis to be tested, not a final verdict. Can this hypothesis be tested with data?
If the answer to most questions is "unknown" or "not measured," the reference to the Dunning-Kruger effect is not analysis—it's confirmation of one's own opinion.

The protocol works both ways: it protects against misapplication of the effect and helps recognize when the effect is genuinely relevant. When data exists, the questions become a tool for calibrating thinking, not a weapon in an argument.
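As a rough sketch, the seven questions can be turned into a scoring checklist. Everything here is illustrative: the question phrasings are condensed from the protocol above, and the thresholds are assumptions, not part of the article's methodology.

```python
# Condensed versions of the seven protocol questions.
QUESTIONS = [
    "competence measured objectively and independently",
    "self-assessment measured systematically",
    "competence/self-assessment correlation shown in this domain",
    "third variables (motivation, culture, stress) controlled",
    "effect replicated in this specific domain",
    "claim applied to a group trend, not one individual",
    "effect used as a testable hypothesis, not a label",
]

def evaluate(answers):
    """answers: dict mapping question -> True, False, or None (unknown)."""
    yes = sum(1 for q in QUESTIONS if answers.get(q) is True)
    unknown = sum(1 for q in QUESTIONS if answers.get(q) is None)
    # Illustrative thresholds: mostly unknown or mostly unmet means the
    # reference is rhetoric; near-complete support means it may be valid.
    if unknown >= 4 or yes <= 2:
        return "rhetorical device -- reference not justified"
    if yes >= 6:
        return "application plausibly justified"
    return "partially supported -- needs more evidence"

# Typical internet usage: nothing measured, everything unknown.
print(evaluate({}))  # -> "rhetorical device -- reference not justified"
```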

⚔️ Counter-Position Analysis

⚖️ Critical Counterpoint

Criticism of the popular interpretation of the Dunning-Kruger effect is valid, but itself requires clarification. Here are areas where the argumentation may be incomplete or where counterarguments deserve attention.

Statistical Artifacts — Not a Complete Explanation

Regression to the mean and the "ceiling effect" do indeed contribute to the observed pattern, but this does not mean that metacognitive deficit in incompetent people is completely absent. Some studies with controls for these artifacts still find a residual effect.

Simplification — Not Always Distortion

Simplifying scientific concepts for mass audiences is an inevitable process, and not every simplification is a distortion. Perhaps the popular version of the Dunning-Kruger effect serves a useful function: it makes the idea of metacognitive calibration accessible and stimulates reflection about one's own competence, even if the details are imprecise.

Practical Applicability Remains Relevant

Even if the effect is partially an artifact, the phenomenon of "people poorly assess their competence in unfamiliar domains" remains real and important for education, hiring, and decision-making. Focus on methodological criticism may distract from practical conclusions.

Risk of Demotivating Self-Awareness Development

By asserting that "everyone overestimates themselves," we risk creating the impression that differences in metacognitive abilities are insignificant. This may demotivate the development of self-awareness and work on calibrating one's own assessments.

Data May Change

If future meta-analyses with more rigorous controls for artifacts confirm the robustness of the effect, the critical position will prove premature. Science develops iteratively, and current conclusions are not the final verdict.

❓ Frequently Asked Questions

What is the Dunning-Kruger effect?

The Dunning-Kruger effect is a cognitive bias where people with low competence in a particular area tend to overestimate their abilities more than people with high competence. Important: this does NOT mean that only incompetent people overestimate themselves. The original study by Kruger and Dunning (1999) showed that all participants—from novices to experts—overestimated their performance when their skills were insufficient. The difference is that people in the bottom quartile did so significantly more (overestimating by 40-50 percentiles), while more competent individuals did so moderately (by 10-15 percentiles). The popular interpretation that "the stupid are confident, the smart have doubts" is an oversimplification that ignores the nuances of the data.

Does the effect mean that stupid people are confident and smart people doubt?
No, this is a distortion. The original study did not measure "stupidity" or general intelligence—it measured specific skills (logic, grammar, humor) among Cornell University students. The results showed that people with low test scores overestimated their performance more than those who scored high. But this is not a linear relationship of "stupidity = confidence." Moreover, critics point out that the observed pattern may be partially explained by a statistical artifact—regression to the mean and a "ceiling effect" for high-performing participants. The popular version has turned a subtle effect of metacognitive calibration into a caricature of "stupid overconfident people," which is itself a cognitive error.

Why has the popular interpretation become so widespread?
Because it provides a simple explanation for a complex social phenomenon and allows people to feel intellectually superior. The "Dunning-Kruger effect" meme has become a weapon in arguments: accusing an opponent of being "too stupid to understand their stupidity" is a way to disqualify their position without examining their arguments. This is cognitively pleasant: if I know about the effect, then I'm not in the bottom quartile, which means I'm competent. But this is a logical fallacy: knowing about a cognitive bias does not make you immune to it. Moreover, using the Dunning-Kruger effect as an ad hominem argument is itself a form of intellectual incompetence, an irony that usually escapes users.

Is the Dunning-Kruger effect scientifically confirmed?
Yes, but with caveats. The original Kruger & Dunning (1999) study included four experiments with a total sample of about 300 students. The effect has been replicated in a number of subsequent studies across different domains (medicine, driving, financial literacy). However, methodological critics point to problems: small samples, specific tasks, lack of control for statistical artifacts. Some researchers (e.g., Gignac & Zajenkowski, 2020) have shown that a significant portion of the effect can be explained by the better-than-average effect and regression to the mean, rather than a unique metacognitive deficit of the incompetent. Consensus: the phenomenon exists, but its interpretation and magnitude remain subjects of debate.

What is regression to the mean, and what does it have to do with the effect?
Regression to the mean is a statistical phenomenon where extreme values in one measurement tend to be closer to the mean in repeated measurements. In the context of the Dunning-Kruger effect, this means: if someone scored very low on a test (bottom quartile), their self-assessment, even if it's random or based on a general self-concept, is likely to be closer to the mean than their actual performance. This creates an illusion of overestimation. Similarly, people with very high scores may appear "modest" because their self-assessment regresses to the mean. Critics argue that a significant portion of the Dunning-Kruger graph can be explained by this artifact rather than actual metacognitive deficit.

Do only incompetent people overestimate themselves?
No, everyone overestimates themselves in areas where their skills are insufficient. The original Kruger & Dunning data show that participants in all performance quartiles overestimated their results when tasks were challenging for them. The difference is in the degree of overestimation: the bottom quartile overestimated by 40-50 percentiles, the second quartile by 20-30, the third by 10-15, and the top by 5-10 or even slightly underestimated. Key point: this is not a binary division into "incompetent overconfident" and "competent modest." It's a continuum where everyone is subject to metacognitive errors, but to varying degrees. The popular interpretation ignores this nuance.

Can the effect explain why people believe in conspiracy theories or reject vaccines?
This is tempting but methodologically incorrect. The Dunning-Kruger effect describes metacognitive calibration in specific, measurable skills (e.g., logic problems, grammar). Applying it to complex worldview systems (anti-vaccination, conspiracy theories) is an extrapolation beyond the data. These phenomena are better explained by other mechanisms: motivated reasoning, confirmation bias, epistemic distrust of institutions, social identity. Using the Dunning-Kruger effect as a universal explanation for "why people believe stupid things" is itself a form of intellectual laziness and oversimplification. Moreover, it can be counterproductive: calling opponents "too stupid to understand" closes dialogue and increases polarization.

How can I tell whether I'm overestimating my own competence?
The direct way is to get objective feedback from competent people or through standardized tests. But there are indirect markers of metacognitive calibration: (1) Compare your self-assessment with actual results in measurable tasks (exams, projects, competitions). If the gap is systematically large—that's a signal. (2) Ask yourself: can I accurately predict where I'll make mistakes? Competent people better know the boundaries of their knowledge. (3) Look for areas where you feel confident but lack formal training or practice—these are risk zones. (4) Pay attention to your reaction to criticism: if you systematically reject expert feedback, this may be a sign of metacognitive deficit. Important: we're all subject to this effect in different areas. The goal is not to avoid it completely, but to develop a habit of epistemic humility and verification.

Does the effect hold up in current research?
Current research shows a mixed picture. On one hand, the effect is replicated across various domains: medical diagnosis (medical students overestimate their skills), financial literacy (people with low financial knowledge overestimate their decision-making ability), driving (most drivers consider themselves above average). On the other hand, methodological criticism is growing: studies by Gignac & Zajenkowski (2020), Nuhfer et al. (2016) show that a significant portion of the effect may be a measurement artifact. Meta-analyses do not yet provide a definitive answer about the magnitude of the "true" effect after controlling for statistical biases. Consensus: the phenomenon of metacognitive miscalibration exists, but the popular interpretation exaggerates it and ignores its complexity.

Why do experts sometimes underestimate themselves?
This is called "impostor syndrome" or, in the context of Dunning-Kruger, the "flip side" of the effect. The original study showed that high-performing participants sometimes slightly underestimated their results (by 5-10 percentiles). Explanations: (1) Experts better understand the complexity of tasks and the boundaries of their knowledge—this is metacognitive competence. (2) "Curse of knowledge": experts assume others know as much, and therefore consider their results "normal." (3) Social norms: in academic and professional environments, modesty is encouraged, which can influence self-assessment. (4) Statistical artifact: regression to the mean works both ways. Important: expert underestimation is usually minor compared to novice overestimation.
Deymond Laplasa
Cognitive Security Researcher

Author of the Cognitive Immunology Hub project. Researches mechanisms of disinformation, pseudoscience, and cognitive biases. All materials are based on peer-reviewed sources.
// SOURCES
[01] The Dunning-Kruger Effect in a workplace computing setting
[02] Why Ineffective Psychotherapies Appear to Work
[03] Cognitive and academic benefits of music training with children: A multilevel meta-analysis
[04] Were Persulfate-Based Advanced Oxidation Processes Really Understood? Basic Concepts, Cognitive Biases, and Experimental Details
[05] Unskilled and unaware in the classroom: College students' desired grades predict their biased grade predictions
[06] Predicting biases in very highly educated samples: Numeracy and metacognition
[07] Promoting Student Metacognition
[08] Overconfidence in Managing Health Concerns: The Dunning-Kruger Effect and Health Literacy
