© 2026 Deymond Laplasa. All rights reserved.

📁 AI Ethics
⚠️Ambiguous / Hypothesis

Algorithmic Fairness: Why It's Mathematically Impossible to Satisfy All Criteria Simultaneously — and What This Means for AI Systems

Algorithmic fairness faces a fundamental mathematical problem: different definitions of fairness (demographic parity, equal opportunity, calibration) are incompatible with each other. Impossibility theorems prove that a system cannot simultaneously satisfy all criteria if base rates differ between groups. This is not a technical flaw, but a mathematical fact requiring deliberate priority choices when designing AI systems.

🔄 Updated: February 11, 2026
📅 Published: February 8, 2026
⏱️ Reading time: 13 min

Neural Analysis
  • Topic: Mathematical impossibility of simultaneously satisfying all algorithmic fairness criteria in AI systems
  • Epistemic status: High confidence — based on mathematical proofs of impossibility theorems and systematic reviews
  • Evidence level: Mathematical proofs, systematic methodology reviews, meta-analyses of various approaches' effectiveness
  • Verdict: Algorithmic fairness is not a technical problem that can be "solved" with better code. It's a fundamental mathematical trade-off requiring explicit ethical choices between competing definitions of fairness.
  • Key anomaly: Public discourse demands AI "be fair" without recognizing that fairness itself has mutually exclusive mathematical definitions
  • 30-second check: Ask anyone talking about "algorithmic fairness": which specific definition of fairness are they using — demographic parity, equalized odds, or predictive parity?

When an algorithm decides on a loan, a job application, or parole, we demand fairness from it. But what if mathematics itself proves that it's impossible to be fair to everyone simultaneously? This isn't a philosophical debate or a technical oversight—it's a fundamental theorem that shatters the illusion of universal algorithmic fairness. Every AI system claiming objectivity is actually making a hidden choice: whose fairness it protects, and whose it sacrifices.

What is algorithmic fairness—and why there can't be just one

Algorithmic fairness is a set of mathematical criteria that determine how impartially a system makes decisions about different groups of people. The problem begins with the fact that there isn't one, but multiple incompatible definitions of fairness, each seeming intuitively correct yet contradicting the others.

Three core definitions of fairness that cannot coexist

Statistical parity (demographic parity) requires that positive decisions be distributed equally across groups: if an algorithm approves 30% of loans in group A, it must approve 30% in group B. This definition ignores differences in base rates—for example, if one group objectively has more creditworthy applicants.

Equalized odds requires that the probability of a correct positive decision (true positive rate) and the probability of a false positive decision (false positive rate) be identical for all groups. If a person is truly creditworthy, their chances of approval shouldn't depend on their group.

Calibration requires that predicted probability matches the actual frequency of the event in each group. If an algorithm assigns an applicant a 70% probability of loan repayment, then among all applicants with that score, approximately 70% should actually repay the loan.
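The three definitions are easy to state as code. Below is a minimal sketch over hypothetical (y_true, y_pred, group) records; a real audit would use a library such as fairlearn, but the arithmetic reduces to this:

```python
# Sketch: the quantities each fairness criterion compares, computed for a
# toy classifier. All records here are invented for illustration.

def group_metrics(data, g):
    """Selection rate, TPR, and FPR for one group."""
    rows = [(y, yhat) for y, yhat, grp in data if grp == g]
    pos = [yhat for y, yhat in rows if y == 1]   # truly positive cases
    neg = [yhat for y, yhat in rows if y == 0]   # truly negative cases
    return {
        "selection_rate": sum(yhat for _, yhat in rows) / len(rows),  # demographic parity compares these
        "tpr": sum(pos) / len(pos),  # equalized odds: true positive rate
        "fpr": sum(neg) / len(neg),  # equalized odds: false positive rate
    }

# (y_true, y_pred, group) — hypothetical decisions for two groups
data = [
    (1, 1, "A"), (1, 1, "A"), (1, 0, "A"), (0, 1, "A"), (0, 0, "A"),
    (1, 1, "B"), (1, 0, "B"), (0, 1, "B"), (0, 0, "B"), (0, 0, "B"),
]
metrics_a = group_metrics(data, "A")   # selection_rate 0.6, tpr 2/3, fpr 0.5
metrics_b = group_metrics(data, "B")   # selection_rate 0.4, tpr 0.5, fpr 1/3
```

Even in this tiny example the criteria pull apart: selection rates differ (parity fails) while error rates differ too, and no relabeling of five records per group can fix all three at once.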

| Criterion | Protects | Ignores |
|---|---|---|
| Statistical parity | Systemic discrimination at the outcome level | Differences in base rates between groups |
| Equalized odds | Individual fairness: same characteristics → same chances | Overall distribution of opportunities between groups |
| Calibration | Prediction accuracy: "70%" means exactly 70% | Group differences in decision distribution |

Mathematics doesn't allow satisfying all three criteria simultaneously if base rates differ between groups. This isn't a question of better algorithms or more data—it's an impossibility theorem (S001).

Each definition appeals to different moral intuitions, and each intuition is valid in its own context. But when base rates (proportion of creditworthy individuals, proportion of recidivists, proportion of qualified candidates) differ between groups, choosing one criterion automatically violates the others.

This means that the fairness of an AI system isn't an objective fact that can be "computed," but a political choice: which moral intuition you're willing to sacrifice for others (S002).

Figure: the impossibility triangle of algorithmic fairness. Three definitions of fairness form its vertices; a system can satisfy at most two criteria simultaneously if groups' base rates differ.

Mathematical Proof of Impossibility: The Hardt–Price–Srebro and Kleinberg–Mullainathan–Raghavan Theorems

Fundamental impossibility theorems in algorithmic fairness are not empirical observations, but rigorous mathematical proofs of structural incompatibility between fairness criteria (S001). They demonstrate that under certain conditions, it's impossible to satisfy two fairness criteria simultaneously, no matter how good your algorithm is.

Incompatibility Theorem: Demographic Parity and Equalized Odds

Moritz Hardt, Eric Price, and Nathan Srebro proved that a binary classifier cannot simultaneously satisfy demographic parity and equalized odds if base rates of the positive class differ between groups (S001).

Demographic Parity
The algorithm produces positive decisions at equal rates across all groups: P(Ŷ=1|A=0) = P(Ŷ=1|A=1).
Equalized Odds
The algorithm makes errors equally across all groups: P(Ŷ=1|Y=1,A=0) = P(Ŷ=1|Y=1,A=1) and P(Ŷ=1|Y=0,A=0) = P(Ŷ=1|Y=0,A=1).

When base rates differ — P(Y=1|A=0) ≠ P(Y=1|A=1) — these requirements lead to contradictory equations. The only exceptions: a perfect classifier (always correct) or a completely random one (always guessing).

This isn't an algorithm bug. It's a mathematical fact: if two groups have different base rates, you cannot simultaneously produce equal proportions of positive decisions and make errors equally.
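The contradiction follows from a one-line identity: any classifier's selection rate decomposes as r = TPR·p + FPR·(1−p), where p is the group's base rate. A sketch with invented numbers:

```python
# If equalized odds holds (shared TPR and FPR across groups) while base
# rates differ, equal selection rates would force TPR == FPR, i.e. a
# classifier no better than chance. Numbers are illustrative only.

def selection_rate(tpr, fpr, base_rate):
    # P(decision = 1) = P(1 | Y=1) P(Y=1) + P(1 | Y=0) P(Y=0)
    return tpr * base_rate + fpr * (1 - base_rate)

tpr, fpr = 0.9, 0.2        # a useful classifier: TPR != FPR
p0, p1 = 0.5, 0.3          # differing base rates between groups

r0 = selection_rate(tpr, fpr, p0)   # 0.9*0.5 + 0.2*0.5 = 0.55
r1 = selection_rate(tpr, fpr, p1)   # 0.9*0.3 + 0.2*0.7 = 0.41
# r0 != r1: under equalized odds, demographic parity fails.

# Only a chance-level classifier (TPR == FPR) restores parity:
r0_rand = selection_rate(0.4, 0.4, p0)
r1_rand = selection_rate(0.4, 0.4, p1)
```

No amount of retraining changes this: the gap r0 − r1 equals (TPR − FPR)·(p0 − p1), so it vanishes only when the classifier is uninformative or the base rates coincide.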

Incompatibility Theorem: Calibration and Equalized Odds

Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan proved an analogous result for calibration (S002). Calibration requires that if an algorithm assigns a probability of 0.7, then among all cases with that score, the actual frequency of positive outcomes should be 0.7 — separately for each group.

The theorem states: if base rates differ between groups, a calibrated classifier cannot simultaneously satisfy equalized odds (except in cases of perfect prediction).

  • Calibration requires: predictions reflect real differences in base rates between groups.
  • Equalized odds requires: ignoring these differences when making decisions.
  • Result: a fundamental contradiction, mathematically irresolvable.
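The shape of the Kleinberg–Mullainathan–Raghavan argument can be compressed into a few lines. This is a paraphrase of the proof idea, not the paper's full statement; notation: S is the score, Y the outcome, A the group, p_a the group's base rate.

```latex
% Calibration within each group: among cases scored s, a fraction s are positive.
\mathbb{E}[Y \mid S = s, A = a] = s
% Hence a calibrated score averages to the group's base rate:
\mathbb{E}[S \mid A = a] = p_a
% Decompose the average score by outcome class:
p_a = p_a \,\mathbb{E}[S \mid Y{=}1, A{=}a] + (1 - p_a)\,\mathbb{E}[S \mid Y{=}0, A{=}a]
% Score balance (the score analogue of equalized odds) demands
% group-independent class averages \mu_1 and \mu_0, so for every group a:
p_a = p_a \mu_1 + (1 - p_a)\,\mu_0
% Two groups with p_0 \neq p_1 give two such equations; subtracting them
% yields \mu_1 - \mu_0 = 1, whose only admissible solution in [0,1] is
% \mu_1 = 1, \mu_0 = 0: perfect prediction. Otherwise calibration and
% balance cannot coexist.
```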

COMPAS and ProPublica: When Theory Meets Practice

The COMPAS system assesses recidivism risk for parole decisions. A 2016 ProPublica investigation revealed asymmetry: among African Americans who did not reoffend, 44.9% were incorrectly classified as high-risk; among whites — 23.5% (S001).

Northpointe developers countered: the system is calibrated. Among all those assigned high risk, the actual recidivism rate is equal between groups. Both sides were mathematically correct — a direct consequence of impossibility theorems.

| Criterion | Did COMPAS satisfy it? | Why? |
|---|---|---|
| Calibration | Yes | Predicted probability matched actual frequency in each group |
| Equalized odds | No | Error rates differed between groups (44.9% vs 23.5%) |
| Demographic parity | No | Proportion of high-risk scores differed between groups |

Base recidivism rates differed between groups — a fact of the data, not an algorithm error. Therefore, it was impossible to satisfy all three criteria simultaneously. The system worked as designed, but mathematics didn't allow for an ideal solution.
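The structure of the Northpointe/ProPublica disagreement can be reproduced with toy numbers: a score that is perfectly calibrated in both groups still yields unequal false positive rates once base rates differ. The bucket shares and scores below are illustrative, not the real COMPAS data.

```python
THRESHOLD = 0.5   # scores at or above this get the "high risk" label

# {score: share of group with that score}; calibration means a fraction
# `score` of each bucket actually reoffends, identically in both groups.
groups = {
    "A": {0.2: 0.5, 0.8: 0.5},   # base rate 0.5*0.2 + 0.5*0.8 = 0.50
    "B": {0.2: 0.8, 0.8: 0.2},   # base rate 0.8*0.2 + 0.2*0.8 = 0.32
}

def false_positive_rate(buckets):
    """P(labelled high risk | did not reoffend) under calibration."""
    neg_total = sum(share * (1 - s) for s, share in buckets.items())
    neg_high = sum(share * (1 - s) for s, share in buckets.items()
                   if s >= THRESHOLD)
    return neg_high / neg_total

fpr_a = false_positive_rate(groups["A"])   # 0.10 / 0.50 = 0.20
fpr_b = false_positive_rate(groups["B"])   # 0.04 / 0.68 ≈ 0.059
# Same calibrated score, yet non-reoffenders in group A are flagged
# roughly 3.4x as often: the structure of the 2016 dispute.
```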

⚠️Five Arguments for Why the Problem Is Real and Unsolvable

Skeptics might argue that impossibility theorems are abstract mathematics. However, several strong arguments demonstrate that the problem has direct practical consequences. For more details, see the section How Artificial Intelligence Works.

Argument 1: The Theorems Apply to Any Algorithm, Including Neural Networks

Impossibility theorems are independent of algorithm architecture (S001). They apply to logistic regression, decision trees, neural networks, ensembles—any system that produces binary predictions or probabilities.

Improved algorithms, more data, more complex models—none of this solves the problem. As long as real differences in base rates exist between groups, the theorems remain in force.

Argument 2: Base Rates Differ in Most Real-World Applications

The critical condition of the theorems—differences in base rates between groups—is met in the overwhelming majority of practical AI applications (S002).

  • In medicine: disease prevalence varies by age, sex, and ethnicity
  • In lending: historical default rates differ between socioeconomic groups (S003)
  • In criminal justice: base rates of recidivism vary across demographic groups

These differences are often the result of historical discrimination and systemic barriers. But regardless of the causes, their existence makes the theorems applicable.

Argument 3: The Choice of Fairness Criterion Has Measurable Consequences

The decision about which criterion to prioritize directly affects the distribution of errors between groups.

| Criterion | Consequence for Low Base Rate Group | Consequence for High Base Rate Group |
|---|---|---|
| Demographic parity | More false positive decisions | More false negative decisions |
| Equalized odds | Disproportionate group-level outcomes | Disproportionate group-level outcomes |

In medical diagnosis: a false negative is a missed disease, a false positive is unnecessary treatment. In lending: a false negative denies opportunity, a false positive creates risk for the lender (S005).

Argument 4: Legal and Regulatory Frameworks Are Inconsistent

Different jurisdictions use different definitions of discrimination that correspond to incompatible mathematical criteria.

In the US, the "disparate impact" doctrine is close to demographic parity: disproportionate impact on a protected group may be considered discrimination, even if the algorithm doesn't directly use protected attributes. The European GDPR and AI Act emphasize individual fairness and transparency, which aligns more closely with calibration and equalized odds requirements.

A system compliant with one jurisdiction's requirements may violate another's—not due to technical flaws, but because of the mathematical incompatibility of the requirements themselves (S004).

Argument 5: Hidden Criterion Selection Creates an Illusion of Objectivity

Most commercial AI systems don't disclose which fairness criterion they prioritize, creating an illusion of universal objectivity. When a company claims its algorithm is "fair," that statement is meaningless without clarification: fair according to which definition?

Lack of transparency masks fundamental value judgments as technical neutrality. This is especially problematic in critical domains—criminal justice, healthcare, education—where affected individuals have no opportunity to challenge or understand what tradeoffs were made.

The mathematical impossibility of universal fairness means that every system makes a normative choice that should be explicit and subject to public deliberation.

Figure: distribution of algorithm errors between groups under different fairness criteria. Optimizing for different criteria leads to different distributions of errors between groups.

Mechanisms That Turn Mathematical Fact Into Social Problem

Impossibility theorems describe mathematical constraints, but their social impact is mediated by specific mechanisms through which algorithmic decisions affect people's lives. Understanding these mechanisms is critical for assessing real-world consequences. More details in the Logical Fallacies section.

Feedback Loops Amplify Historical Inequalities

Algorithms learn from historical data that reflects existing inequalities. If a credit scoring system is trained on data where certain groups historically received fewer loans (due to discrimination or structural barriers), it reproduces these patterns.

When an algorithm makes decisions, it creates new data for retraining the model—closing a feedback loop (S002). Each choice of fairness criterion has consequences: calibration accurately predicts historical patterns (including discriminatory ones), demographic parity creates more errors in both groups, equalized odds generates disproportionate outcomes at the group level. Loops amplify these consequences over time.

| Optimization Criterion | Amplification Mechanism | Long-Term Effect |
|---|---|---|
| Calibration | Reproduces historical patterns accurately | Discrimination becomes "predictable" and legitimate |
| Demographic parity | Increases errors in both groups | Declining trust in the system, unpredictable rejections |
| Equalized odds | Creates disproportionate outcomes at the group level | Visible inequality in outcomes, social tension |

Proxy Variables Bypass Direct Discrimination Protections

Even if an algorithm doesn't use protected attributes (race, gender, age) directly, it uses proxy variables that strongly correlate with these attributes. ZIP code correlates with neighborhood racial composition, names may indicate ethnicity, purchase history correlates with gender.

Machine learning algorithms automatically discover these correlations and use them for predictions (S001). A formally "group-blind" system effectively makes decisions based on group membership through proxies. Impossibility theorems apply here too: if proxy variables allow distinguishing between groups, mathematical constraints on simultaneously satisfying fairness criteria remain in force.

Removing proxy variables may reduce prediction accuracy, but doesn't solve the fundamental problem of criterion incompatibility. This is a choice between visible discrimination and hidden discrimination.
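The mechanism can be shown with a purely synthetic sketch: remove the protected attribute entirely, and a single correlated proxy still predicts group membership far above chance. The population and the 85%/15% neighborhood split are invented for illustration.

```python
import random
random.seed(0)  # reproducible synthetic population

population = []
for _ in range(10_000):
    group = random.random() < 0.5
    # the proxy: most group members live in neighborhood 1, most others do not
    neighborhood = random.random() < (0.85 if group else 0.15)
    population.append((group, neighborhood))

# The trivial rule "predict group from the proxy alone" is far above chance:
hits = sum(1 for g, n in population if g == n)
accuracy = hits / len(population)   # ≈ 0.85 versus 0.5 for guessing
```

Any learner trained on this data can reconstruct group membership to roughly the same accuracy, which is why "group-blind" pipelines remain subject to the impossibility theorems.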

Contextual Dependency: One Decision, Different Consequences

The same algorithmic decision has different consequences for different groups due to differences in social and economic context. A loan denial for a high-income person is an inconvenience. A denial for someone on the edge of poverty can mean inability to pay for medical care or education.

A false positive prediction of high recidivism risk for someone with strong social support can be challenged. For someone without resources, it can mean years of additional incarceration (S003). Mathematical fairness criteria operate on probabilities and error rates, but don't account for differences in severity of consequences.

  1. A system can be "fair" in the sense of equalized odds (equal error rates)
  2. But create disproportionate harm if the consequences of errors differ between groups
  3. This is a limitation of purely mathematical approaches to fairness
  4. Requires accounting for context that algorithms cannot formalize

The connection between these mechanisms and broader AI ethics issues is examined in materials on AI ethics and safety. Similar feedback loops and proxy variables operate in facial recognition systems, where historical data contains even deeper layers of structural inequality.

⚠️Cognitive Traps That Prevent Understanding the Problem

Discussions about algorithmic fairness often get stuck in cognitive traps that block understanding of the fundamental nature of the problem. Recognizing these traps is a critical condition for productive discussion. Learn more in the Thinking Tools section.

Trap 1: The Illusion of a Technical Solution to a Normative Problem

A common misconception: a sufficiently sophisticated algorithm or complete dataset will solve the problem of universal fairness. This is a category error—an attempt to solve a normative question (which definition of fairness is correct?) through technical means (a better algorithm). Impossibility theorems (S001) demonstrate: the problem isn't code quality, but the incompatibility of the definitions themselves.

The trap is dangerous because it creates a false sense of progress. Companies invest in "improving fairness" without acknowledging they're choosing between incompatible criteria. This choice is disguised as technical optimization, avoiding the normative question: whose fairness are we prioritizing and why?

Technical optimization cannot replace normative decision-making. An algorithm cannot be fair—only the choice we embed in it can be fair.

Trap 2: The False Dichotomy of "Fairness vs Accuracy"

The discussion is often framed as a tradeoff: fairness requires sacrificing accuracy. This is a false dichotomy that obscures the real problem. The tradeoff isn't between fairness and accuracy, but between different definitions of fairness (S002).

A system can be maximally accurate (minimum overall error) and satisfy one fairness criterion while violating another. Framing it as "fairness vs accuracy" allows avoiding the difficult conversation: whose interests do we prioritize?

  1. A system can be calibrated (predictions match reality) while violating error equality between groups
  2. A system can have equal errors between groups while being uncalibrated for minorities
  3. A system can minimize overall error while maximizing error variance between groups

⚠️ Trap 3: Naturalizing Base Rates

When we see differences in base rates between groups (different recidivism rates, different incomes), there's a cognitive tendency to naturalize them—perceive them as natural, inevitable, reflecting real differences. This ignores that base rates are often the result of historical discrimination and systemic barriers.

Naturalization leads to the conclusion that calibration is the only reasonable criterion: the algorithm should accurately predict reality, whatever it may be. This perpetuates injustices because "reality" itself is a product of unjust systems (S003).

  • Naturalization: the cognitive error of perceiving a social/historical fact as a natural phenomenon. Example: "Group A has a higher recidivism rate—therefore, the algorithm should reflect this."
  • Critical distinction: a descriptive fact (base rates differ) ≠ a normative conclusion (algorithms should reproduce these differences). The first is an observation, the second is a political choice.
  • Developer's trap: calibration appears "objective" and "neutral," but it masks a choice: reproduce historical injustices or correct them.

Trap 4: Conflating Levels of Analysis

Arguments often jump between the individual level (is the decision fair for a specific person?) and the group level (is the distribution between groups fair?). These levels have different fairness criteria, and conflating them creates an illusion of contradiction where none exists.

A system can be fair at the individual level (each decision logically follows from the data) and unfair at the group level (groups receive different outcomes). Or vice versa: fair at the group level (equal proportions) and unfair at the individual level (ignores relevant differences). Critical thinking requires explicitly stating which level we're discussing fairness at (S004).

Trap 5: Seeking the "Right" Criterion Instead of Acknowledging Choice

The deepest trap: the belief that there exists one "correct" fairness criterion we simply haven't found yet. This leads to endless debates about which criterion is better, instead of acknowledging that choosing a criterion is a political decision, not a technical discovery.

Different fairness criteria reflect different values: equality of opportunity, equality of outcomes, respect for autonomy, harm minimization. There's no mathematical way to choose between them. Acknowledging this isn't defeat, but the beginning of an honest conversation: who makes the decision, based on what values, and who bears the consequences (S005).

Seeking an "objective" fairness criterion is an attempt to avoid responsibility for choice. The choice always exists. The only question is who makes it and whether they acknowledge it.

Verification Protocol: How to Assess AI System Fairness in Seven Steps

When an organization implements an AI system for decision-making, conducting a fairness audit is critically important. This protocol is based on understanding impossibility theorems (S001) and helps identify hidden trade-offs.

✅Step 1: Identify Protected Groups and Baseline Metrics

Determine which groups are affected by the system's decisions (race, gender, age, socioeconomic status). Measure baseline metrics for the target variable in each group.

In a credit scoring system: what is the actual default rate in each group? In medical diagnostics: what is the disease prevalence? If baseline metrics differ, impossibility theorems apply (S002), and the system cannot simultaneously satisfy all criteria.

  1. Identify demographic groups relevant to the context
  2. Collect data on actual outcomes in each group
  3. Calculate baseline rates (prevalence rate)
  4. Document data source and collection period
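Step 1 reduces to a short computation. In the sketch below the record layout (group, outcome) and the numbers are hypothetical:

```python
# Per-group base rates from audit records; if they differ, the
# impossibility theorems apply and the criteria will conflict.

def base_rates(records):
    """records: iterable of (group, outcome) pairs, outcome in {0, 1}."""
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

# e.g. loan repayment outcomes for two groups of 100 applicants each
records = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 55 + [("B", 0)] * 45
rates = base_rates(records)                     # {'A': 0.7, 'B': 0.55}
theorems_apply = len(set(rates.values())) > 1   # True: criteria will conflict
```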

✅Step 2: Choose Fairness Criteria and Explicitly Name Trade-offs

There is no universal definition of fairness (S001). Select 2–3 criteria relevant to your context: demographic parity, equalized odds, calibration, predictive parity.

Each choice is a political decision, not a technical one. Document why you chose these specific criteria and which alternatives you rejected.

| Criterion | What It Tests | When to Apply |
|---|---|---|
| Demographic parity | Equal proportion of positive decisions across groups | When there is no information about baseline differences |
| Equalized odds | Equal error rates across groups | When baseline metrics differ |
| Calibration | Probability of a positive outcome is equal at the same score | When decision interpretability is needed |

✅Step 3: Measure Metrics and Identify Conflicts

Calculate the selected metrics for each group. Compare results: where does the system satisfy criteria, and where does it violate them?

If the system simultaneously satisfies demographic parity and equalized odds, that's a signal: either baseline metrics are identical (rare), or metrics are calculated incorrectly. Check your calculations.
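A sketch of Step 3: compute each selected metric per group and flag any metric whose between-group gap exceeds a tolerance. Both the measured values and the 0.05 tolerance are illustrative choices, not standards.

```python
TOLERANCE = 0.05   # maximum per-metric gap treated as "satisfied"

def fairness_gaps(metrics_by_group):
    """{group: {metric: value}} -> {metric: max between-group gap}."""
    metric_names = next(iter(metrics_by_group.values())).keys()
    return {
        m: max(v[m] for v in metrics_by_group.values())
           - min(v[m] for v in metrics_by_group.values())
        for m in metric_names
    }

measured = {
    "A": {"selection_rate": 0.30, "tpr": 0.80, "fpr": 0.10},
    "B": {"selection_rate": 0.18, "tpr": 0.78, "fpr": 0.09},
}
gaps = fairness_gaps(measured)
violations = {m for m, gap in gaps.items() if gap > TOLERANCE}
# Here only the selection rate diverges: equalized odds roughly holds,
# demographic parity fails, exactly the conflict the theorems predict.
```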

✅Step 4: Assess the Cost of Trade-offs

Each criterion choice has a cost (S005). If you choose demographic parity, you sacrifice accuracy for one of the groups. If equalized odds — you allow different proportions of positive decisions.

Quantify this cost: by what percentage will accuracy drop? How many people will receive incorrect decisions? Who will be harmed more?
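The cost in Step 4 can be made concrete with a minimal sketch: impose a common approval rate on an otherwise perfect classifier and count the forced errors. All numbers are illustrative.

```python
GROUP_SIZE = 100
qualified = {"A": 60, "B": 40}   # qualified applicants per group
common_rate = 0.5                # parity target: approve 50 in each group

cost = {}
for g, q in qualified.items():
    approvals = round(common_rate * GROUP_SIZE)
    cost[g] = {
        # approving fewer than the qualified count forces false negatives,
        # approving more forces false positives
        "false_negatives": max(q - approvals, 0),
        "false_positives": max(approvals - q, 0),
    }
# Group A: 10 qualified people denied; group B: 10 unqualified approved.
```

This is the minimum cost; a realistic, imperfect classifier pays it on top of its ordinary errors.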

✅Step 5: Check Whether the System Hides Discrimination Through Proxy Variables

A system may be fair by explicit criteria but use indirect features (proxies) to reproduce discrimination. For example, zip code often correlates with race.

Analyze the features the model uses. Which ones might be proxies for protected characteristics? Remove or reinterpret such features.

✅Step 6: Audit for Cognitive Traps

People implementing the system often believe that mathematics is neutral. Check whether you've fallen into the trap of technological determinism: the belief that an algorithm is inherently fairer than humans.

Compare the system's decisions with human decisions on the same data. Where is the system better? Where is it worse? Why did you choose this particular system?

✅Step 7: Document and Re-audit

Fairness is not a one-time check. Systems degrade: data changes, groups shift, criteria become outdated. Re-audit the system every 6–12 months.

Document all decisions: which criteria you chose, why, what trade-offs you accepted, who is responsible. This creates accountability and helps avoid ethical errors when scaling.

AI system fairness is not a technical problem that can be solved once. It's an ongoing process of negotiation between mathematics, politics, and organizational values. The protocol helps make these negotiations visible and honest.
⚖️ Counter-Position Analysis: Critical Counterpoint

The position on the mathematical impossibility of simultaneously satisfying all fairness criteria has weak points. Here are the main objections that should be considered when evaluating its practical applicability.

Overestimation of Mathematical Impossibility as an Absolute Barrier

Impossibility theorems operate only under strict assumptions — deterministic decisions, fixed groups, static conditions. Probabilistic approaches, context-dependent fairness definitions, and dynamic systems can circumvent some of these limitations. The claim about the insurmountability of the problem may be an exaggeration.

Insufficient Attention to Practical Compromises

In practice, organizations often find acceptable compromises between fairness criteria. Hybrid approaches and multi-criteria optimization, while not satisfying all criteria perfectly, achieve a balance that satisfies most stakeholders. The position may appear overly pessimistic regarding real-world possibilities.

Limited Evidence Base

Sources include systematic reviews of methodology and cognitive analysis, but direct empirical data on the consequences of choosing different fairness metrics is insufficient. Mathematical results are extrapolated to practical situations without enough case studies demonstrating how these trade-offs manifest in real systems.

Ignoring the Evolution of Fairness Definitions

The article captures the current state of impossibility theorems but does not account for the fact that the understanding of fairness itself is evolving. New interdisciplinary approaches (philosophy, law, sociology + mathematics) may offer definitions that better reflect human values and circumvent current mathematical impasses. The position risks becoming outdated as theory develops.

Risk of Action Paralysis

Emphasizing the impossibility of perfect fairness may unintentionally justify inaction: if perfection is unattainable, why try to improve? The focus on mathematical limitations distracts from practical steps to reduce discrimination, even if they don't eliminate it completely. Intellectual purism can be counterproductive to real improvements.

Frequently Asked Questions

What is algorithmic fairness?
Algorithmic fairness is a set of mathematical criteria that define how an AI system should treat different groups of people. Multiple incompatible definitions exist: demographic parity requires positive decisions to be distributed proportionally across groups; equal opportunity requires equal true positive rates; predictive parity requires equal prediction accuracy. It's mathematically proven that a system cannot simultaneously satisfy all these criteria if base rates differ between groups.

Why can't an AI system be fair to everyone simultaneously?
Because different definitions of fairness are mathematically incompatible with each other. Impossibility theorems prove that if base rates (e.g., frequency of positive outcomes) differ between groups, an algorithm cannot simultaneously ensure demographic parity, equal opportunity, and calibration. This isn't a technology flaw—it's a fundamental mathematical constraint, analogous to Arrow's impossibility theorem in social choice theory. Choosing one fairness criterion automatically violates others.

How do demographic parity and equal opportunity differ?
Demographic parity requires equal rates of positive decisions for all groups, while equal opportunity requires equal rates of true positives among qualified candidates. Demographic parity means if an algorithm approves 20% of applications, it must approve 20% in each demographic group. Equal opportunity means that among truly qualified candidates, the algorithm must approve the same percentage regardless of group. If baseline qualification differs between groups, these two criteria become mathematically incompatible: satisfying one automatically violates the other.

How effective are fairness interventions in practice?
Systematic reviews show that fairness methods are effective for specific chosen metrics but create trade-offs with other metrics and overall accuracy. Meta-analyses confirm that post-processing techniques and algorithm modifications can improve a selected fairness criterion, but this improvement comes at the cost of degrading other criteria or reducing overall predictive accuracy (S004, S008). Systematic reviews also reveal methodological issues: lack of standardized definitions, difficulty comparing results across studies, insufficient attention to application context (S011).

What does cognitive task analysis have to do with algorithmic fairness?
The connection is indirect but important: cognitive task analysis (CTA) reveals how experts make decisions, which is critical for understanding which fairness criteria are relevant. Research shows CTA significantly improves learning of procedural knowledge and technical skills (S012). In the context of algorithmic fairness, this means that before automating decisions, we must deeply understand experts' cognitive processes: which factors they consider relevant, how they balance competing criteria, what tacit knowledge they use. Without this understanding, an algorithm may optimize the wrong fairness metrics.

Can systematic review methods keep pace with this field?
Yes, living systematic reviews and prospective meta-analyses are ideal for the rapidly evolving field of algorithmic fairness. The ALL-IN meta-analysis methodology allows real-time updates as new data emerges, maintaining statistical validity without requiring pre-specification of analysis timepoints (S004). This is especially important for algorithmic fairness, where new methods and criteria appear constantly. Living reviews can track the effectiveness of different approaches, identify emerging patterns, and provide current recommendations without the delays characteristic of traditional systematic reviews.

What types of fairness definitions exist?
Three main types: individual fairness, group fairness, and causal fairness. Individual fairness requires similar individuals to receive similar decisions. Group fairness (includes demographic parity, equal opportunity, predictive parity) requires statistical equality between demographic groups across various metrics. Causal fairness requires that protected attributes (race, gender) have no causal influence on decisions. Each type is operationalized through different mathematical metrics, and these metrics are often incompatible with each other even within the same type.

Why are public claims that an AI system is "fair" often meaningless?
Because they use one specific definition of fairness without disclosing trade-offs with other definitions. Companies typically choose the metric that looks most favorable for their system and optimize for it. For example, a system may demonstrate demographic parity (equal approval rates) while having different prediction accuracy for different groups (violating calibration). Public claims of "fairness" exploit a cognitive bias: people assume fairness is a single, consistent property, when mathematically it's a set of mutually exclusive criteria. Lack of transparency in choosing a specific metric is a form of ethics washing.

What is an impossibility theorem in algorithmic fairness?
It's a mathematical proof that certain combinations of fairness criteria cannot be satisfied simultaneously. The most well-known theorems (Chouldechova, 2017; Kleinberg et al., 2017) prove: if base rates differ between groups, a system cannot simultaneously ensure calibration (predictive parity), equal false positive rates, and equal false negative rates. This isn't an empirical observation but a rigorous mathematical proof, analogous to proving the impossibility of trisecting an angle with compass and straightedge. The theorem reveals fundamental boundaries of what's technically achievable.

How should a fairness definition be chosen for a specific application?
The choice depends on application context, stakeholder values, and potential consequences of different error types. No universally "correct" definition exists. For medical diagnosis, equal false negative rates may be critical (to avoid missing disease in any group). For credit scoring, calibration may be important (so predicted default probability matches reality for all groups). For hiring, equal opportunity may be relevant (so qualified candidates have equal chances regardless of group). The choice requires explicit ethical discussion involving affected communities, ethics experts, and technical specialists. The key requirement is transparency about the choice and its justification.

Which cognitive biases hinder understanding of this problem?
The main biases are: the illusion of a single solution, technological solutionism, and moral panic around AI. The illusion of a single solution makes people believe that "fairness" is one clear property that can simply be "added" to an algorithm. Technological solutionism creates the expectation that any social problem can be solved with better code. Moral panic around AI leads to demands for "completely fair" systems without understanding mathematical constraints. The Dunning-Kruger effect manifests in people with superficial understanding of the topic being most confident that simple solutions exist. Understanding mathematical impossibility requires cognitive effort and willingness to accept fundamental uncertainty.
Deymond Laplasa
Cognitive Security Researcher

Author of the Cognitive Immunology Hub project. Researches mechanisms of disinformation, pseudoscience, and cognitive biases. All materials are based on peer-reviewed sources.

// SOURCES
[S001] Inherent Trade-Offs in Algorithmic Fairness
[S002] Algorithmic Decision Making and the Cost of Fairness
[S003] Algorithmic Fairness in Mortgage Lending: From Absolute Conditions to Relational Trade-offs
[S004] Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics
[S005] The price of fairness - A framework to explore trade-offs in algorithmic fairness
[S006] Algorithmic Fairness in Mortgage Lending: From Absolute Conditions to Relational Trade-offs
[S007] Algorithmic Fairness in Mortgage Lending: From Absolute Conditions to Relational Trade-Offs
[S008] Disability, fairness, and algorithmic bias in AI recruitment
