Deymond Laplasa
© 2026 Deymond Laplasa. All rights reserved.

Cognitive immunology. Critical thinking. Defense against disinformation.

📁 AI Ethics
❌Disproven / False

Physiognomy in the AI Era: How ResearchGate Became a Pseudoscience Dumping Ground, and Why It's More Dangerous Than It Seems

The search query "pdf physiognomy in the age of ai researchgate" exposes a critical problem: academic platforms are becoming channels for spreading pseudoscientific practices disguised as AI research. Physiognomy—a discredited pseudoscience claiming links between appearance and character—is being revived in facial recognition algorithms, masquerading as "objective data analysis." This article dissects the mechanism of this substitution, reveals the actual evidence level of such work, and provides a protocol for verifying the scientific validity of sources on ResearchGate and arXiv.

🔄 Updated: February 16, 2026
📅 Published: February 12, 2026
⏱️ Reading time: 14 min

Neural Analysis
  • Topic: Revival of physiognomy in AI systems and the data quality problem on academic platforms (ResearchGate, arXiv)
  • Epistemic status: High confidence regarding the discrediting of physiognomy; moderate confidence regarding the scale of the problem on platforms
  • Level of evidence: Critical source analysis, AI ethics reviews, arXiv preprints, absence of systematic reviews on physiognomy in AI
  • Verdict: Physiognomy has no scientific basis and was debunked in the 20th century. Its integration into AI systems is not "new science" but the reproduction of discriminatory practices through algorithms. ResearchGate and arXiv contain materials of varying quality; the absence of peer review makes them vulnerable to pseudoscience.
  • Key anomaly: Concept substitution: "machine learning on faces" ≠ "scientific proof of physiognomy." Correlation in data does not prove a causal relationship between facial features and personality.
  • Check in 30 sec: Find an article on ResearchGate → check for DOI and a journal with impact factor → if it's a preprint without peer review, evidence level ≤2
When an academic platform becomes a distribution channel for discredited pseudoscience, and machine learning algorithms begin resurrecting practices that science abandoned a century ago—this isn't just a content quality problem. It's a systemic failure in the scientific validation mechanism that threatens to turn the AI revolution into a reincarnation of the most toxic prejudices of the past. The query "pdf physiognomy in the age of ai researchgate" exposes precisely such a situation: physiognomy—a pseudoscience about the connection between appearance and character, used to justify racism and discrimination—is returning under the guise of "objective data analysis" through facial recognition systems.

📌Physiognomy and AI: What Users Are Actually Searching For and Why This Query Signals a Critical Problem in the Academic Ecosystem

The search query "pdf physiognomy in the age of ai researchgate" reflects an attempt to find academic work linking the ancient practice of physiognomy with modern artificial intelligence technologies. Physiognomy is a discredited belief system claiming that a person's facial features can reveal their character, intelligence, criminal tendencies, or moral qualities. More details in the Deepfake Detection section.

Historically, this practice was used to justify racial segregation, eugenics, and discrimination (S007). Today it's returning not as overt manifestos, but disguised as algorithmic objectivity.

⚠️ What Modern "Digital Physiognomy" Looks Like in the AI Context

In the machine learning era, physiognomy has gained new life through facial recognition algorithms that supposedly can determine personality characteristics, sexual orientation, political views, or criminal propensity based on photo analysis (S001).

These systems masquerade as "objective data analysis," using neural network terminology and statistical significance, but essentially reproduce the same discredited assumptions about the connection between appearance and internal qualities. Biometric recognition becomes a tool through which old prejudices gain new legitimate status.

Digital Physiognomy
The use of computer vision algorithms to infer personality or behavioral characteristics based on facial analysis. Dangerous because the technological veneer conceals the absence of causal relationships between appearance and internal qualities.
ResearchGate as a Distribution Vector
The platform creates an appearance of academic legitimacy for work that hasn't undergone rigorous peer review. A preprint uploaded there gains an aura of scientific credibility simply by appearing on a platform used by millions of researchers.

🔎 Why ResearchGate Becomes a Channel for Spreading Pseudoscientific Practices

ResearchGate positions itself as a platform for sharing scientific work, but the absence of strict pre-publication peer review turns it into a dumping ground for unverified materials (S003). Any researcher can upload a preprint that will gain the appearance of academic legitimacy simply by being hosted on the platform.

This creates an illusion of scientific consensus around ideas that haven't undergone critical review. Readers see: many citations, high author h-index, attractive graphs — and assume the work passed a quality filter. In reality, there is no filter.

🧩 The Substitution Mechanism: How Pseudoscience Masquerades as AI Research

The key tactic is using technical jargon and data visualizations to create an impression of scientific rigor. A paper may contain classification accuracy graphs, descriptions of neural network architectures, references to datasets — while being based on fundamentally flawed assumptions about causal relationships (S006).

How legitimate research differs from masquerade in AI-physiognomy:

  • Legitimate: a clearly formulated hypothesis about the mechanism of connection. Masquerade: assumes the algorithm will "find" the connection itself, without theoretical justification.
  • Legitimate: confounders controlled (age, lighting, camera angle). Masquerade: uses a "raw" dataset with multiple uncontrolled variables.
  • Legitimate: results reproducible on independent samples. Masquerade: testing only on one dataset or on a subsample of the same dataset.
  • Legitimate: limitations and alternative explanations discussed. Masquerade: results presented as definitive proof.

The problem is compounded by the fact that generative AI systems can create plausible-looking research texts without a real empirical basis. An article can be written convincingly, contain references to real work (often distorting their meaning), and look like the result of serious research.

This isn't just a question of publication quality. AI ethics and safety directly depend on what assumptions are embedded in algorithms. If a system is trained on work reproducing historical prejudices, it will scale them.

[Diagram: pseudoscientific publication flow through academic platforms. Absence of entry barriers, illusion of legitimacy through association with genuine research, and algorithmic amplification through recommendation systems.]

🧪Steelman Analysis: Seven Arguments Used to Justify "Scientific Physiognomy" in the AI Era — and Why They Seem Convincing

To understand why pseudoscientific work on physiognomy finds an audience in academic circles, we need to examine the strongest versions of proponents' arguments. This doesn't mean agreeing with these positions, but it allows us to identify the mechanism of persuasion and weak points in the logic. More details in the AI Myths section.

  1. "Correlations exist objectively — we're just measuring them". If an algorithm detects statistical correlations between facial features and characteristics in a dataset, this is supposedly an objective fact. The model "simply finds patterns" without bias. An appeal to positivism: if a correlation is reproducible, it deserves study.
  2. "Modern datasets and computational power make the analysis fundamentally different". Instead of subjective assessments by a few observers — millions of images and deep neural networks revealing subtle patterns. The scale of data supposedly overcomes past limitations.
  3. "Genetics and embryonic development link face and brain". The face and brain form from the same embryonic tissues, subject to the same genetic factors. Therefore, correlations between morphology and neurocognitive characteristics are possible. Sounds scientific and harder to refute without specialized knowledge.
  4. "Practical applications justify the research". Potential applications: improved security, early diagnosis of disorders, personalized education. A utilitarian argument: if technology can save lives, ethical objections are secondary.
  5. "Criticism is based on political correctness, not science". Opponents are motivated by ideology, afraid of "inconvenient truths" about biological differences. This frame turns critics into opponents of scientific progress.
  6. "People already use physiognomy intuitively — AI just formalizes it". People constantly judge others by appearance. If these judgments contain predictive power, algorithmization can make the process fairer by eliminating individual biases. A paradox: AI physiognomy positions itself as fighting discrimination.
  7. "Banning research won't stop development — better to regulate openly". Technology will develop in closed labs. Public research allows society to understand risks and form regulatory frameworks. A ban drives the problem underground.
Each of these arguments contains seeds of logic that make them convincing to people unfamiliar with the history of physiognomy or machine learning methodology. That's precisely why they work in academic environments.

The first four arguments appeal to technological optimism and a positivist philosophy of science: if we can measure it, it must be real. The last three rely on social rhetoric: accusing critics of ideology (argument 5), normalizing everyday intuitive judgment (argument 6), and appealing to pragmatic openness (argument 7).

The problem is that these arguments ignore the fundamental difference between correlation in a dataset and causal relationship in reality. A dataset is not nature, but a social artifact reflecting historical biases, data collection methods, and variable selection. An algorithm doesn't "discover truth" — it reproduces the structure of the data it was trained on.

The connection between AI physiognomy and the return of phrenology shows that data scale and computational power don't solve methodological problems — they exacerbate them, giving false scientific credibility to results. The genetic argument sounds convincing but ignores that population-level correlation doesn't predict individual characteristics, and social factors (nutrition, stress, lifestyle) affect both face and behavior independently of each other.

The utilitarian argument about practical benefits contains a hidden premise: that the technology actually works. But if the foundation is unreliable, application amplifies harm. Accusing critics of political correctness is a classic technique that shifts discussion from methodology to ideology, avoiding specific scientific objections.

The argument about "formalizing intuition" is paradoxical: if people already discriminate based on appearance, algorithmization doesn't eliminate discrimination but scales it, giving it an appearance of objectivity. This amplifies harm rather than reducing it.

The last argument about open regulation makes sense, but only if research is methodologically sound. If it's not, publicity doesn't save it — it legitimizes pseudoscience. Regulation must start with the criterion: does this actually work? — not with the assumption that it works and attempts to control it.

Understanding these arguments is important not for refuting them (that's the task of following sections), but for diagnosis: why intelligent people accept them. The answer lies in cognitive traps these arguments exploit — and in how AI ethics and safety require critical analysis, not blind trust in technology.

🔬Evidence Base: What the Data Actually Says About the Link Between Appearance and Personality Traits — and Why Most "AI Physiognomy" Studies Are Methodologically Unsound

Critical analysis of the empirical evidence shows that the vast majority of studies claiming AI can determine personality traits from faces suffer from fundamental methodological problems that invalidate their conclusions. For more details, see the section on AI Errors and Biases.

🧪 Problem 1: Confusing Correlation with Causation in the Presence of Multiple Confounders

Even if an algorithm detects a statistical association between facial features and a particular characteristic in a dataset, this does not establish a causal relationship. Faces may correlate with socioeconomic status (through access to cosmetic procedures, nutrition, healthcare), which in turn correlates with education, opportunities, and subsequently with the measured characteristics.

The model "predicts" not internal qualities, but social context (S001).
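This confounding pattern is easy to demonstrate numerically. The following is a minimal toy simulation (illustrative coefficients, not real data): a confounder such as socioeconomic status drives both a "facial feature" proxy and an outcome, so a raw correlation appears even though no direct link exists; controlling for the confounder makes it vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder: SES influences both variables independently.
ses = rng.normal(size=n)
feature = 0.6 * ses + rng.normal(size=n)   # "facial feature" correlates with SES
outcome = 0.6 * ses + rng.normal(size=n)   # outcome is also driven by SES
# By construction, there is NO direct arrow feature -> outcome.

raw_r = np.corrcoef(feature, outcome)[0, 1]

def residualize(y, x):
    """Remove the linear effect of x from y (control for the confounder)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

partial_r = np.corrcoef(residualize(feature, ses),
                        residualize(outcome, ses))[0, 1]

print(f"raw r = {raw_r:.2f}, partial r after controlling SES = {partial_r:.2f}")
```

The raw correlation comes out around 0.26, while the partial correlation is indistinguishable from zero: exactly the pattern described above, where the model "predicts" social context rather than any internal quality.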

📊 Problem 2: Datasets Reflect Existing Biases, Not Objective Reality

Machine learning systems are trained on data created by humans with biases. If in the training dataset people with certain facial features are more often labeled as "criminal" due to police or judicial bias, the model will learn to reproduce this bias rather than discover a genuine relationship.

This is not "objective measurement," but automated discrimination (S004).

For more on the mechanisms of such bias, see the analysis of biometric facial recognition.

🧬 Problem 3: Biological Arguments Are Not Supported by Genetic Research

While the face and brain do develop from related embryonic structures, genetic studies find no significant correlations between genes affecting facial morphology and genes associated with cognitive or personality traits. Shared developmental factors do not imply functional connections in the mature organism.

The embryological argument is a biologized version of a logical fallacy (S001).

⚠️ Problem 4: Self-Fulfilling Prophecy Effects Distort All Measurements

If people with certain facial features are systematically treated differently due to existing stereotypes (for example, perceived as less competent or more aggressive), this affects their life trajectories, opportunities, and psychological state. Any detected correlation may be the result of social influence rather than an innate connection.

Separating these effects in observational data is impossible.

🔎 Problem 5: Publication Bias and P-Hacking in AI Research

Studies that find no relationship between facial features and personality traits are published less frequently than studies with "positive" results. This creates a distorted picture in the literature. Moreover, with large datasets and numerous possible features, it's easy to find statistically significant correlations by chance.

Without pre-registration of hypotheses and rigorous correction for multiple comparisons, most "discoveries" are false positives (S002).
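The multiple-comparisons trap can be shown in a few lines. This toy simulation (the 1.96 and 3.66 cutoffs are approximate normal quantiles for alpha = 0.05 uncorrected and Bonferroni-corrected across 200 tests) correlates 200 pure-noise "facial features" with a pure-noise outcome:

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_features = 500, 200

# Pure noise: no real relationship exists between any feature and the outcome.
features = rng.normal(size=(n_samples, n_features))
outcome = rng.normal(size=n_samples)

# Correlation of each feature with the outcome.
r = np.array([np.corrcoef(features[:, j], outcome)[0, 1]
              for j in range(n_features)])

# For noise, r * sqrt(n) is approximately standard normal, so
# |r| > z / sqrt(n) approximates a two-sided test at the given alpha.
naive_cut = 1.96 / np.sqrt(n_samples)   # alpha = 0.05, uncorrected
bonf_cut = 3.66 / np.sqrt(n_samples)    # approx. alpha = 0.05 / 200

hits_naive = int(np.sum(np.abs(r) > naive_cut))
hits_bonferroni = int(np.sum(np.abs(r) > bonf_cut))

print(f"'significant' by chance: {hits_naive} of {n_features}; "
      f"after Bonferroni correction: {hits_bonferroni}")
```

Around ten features come out "statistically significant" despite being noise; the correction removes essentially all of them. Without pre-registration, a paper can simply report the lucky ten.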

🧾 Problem 6: Lack of Reproducibility in Independent Samples

The critical test of any scientific claim is reproducibility on independent data. Most "AI physiognomy" studies fail this test: models trained on one dataset show sharp drops in accuracy on other samples, especially from different cultural contexts.

This indicates that models are learning specific artifacts of particular datasets rather than universal patterns (S001).
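A sketch of that failure mode, under assumed toy conditions: in "dataset A" a collection artifact (here, image brightness) leaks the label; a model that learns the artifact looks accurate on A but collapses to chance on an independent dataset B where the artifact is absent.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2_000

def make_dataset(artifact_strength):
    # "label": the trait the model claims to predict (random by construction)
    label = rng.integers(0, 2, size=n)
    # "brightness": a collection artifact; it leaks the label only if
    # artifact_strength > 0 (e.g. two classes photographed with different cameras)
    brightness = artifact_strength * label + rng.normal(size=n)
    return brightness, label

bright_a, label_a = make_dataset(artifact_strength=2.0)  # artifact present
bright_b, label_b = make_dataset(artifact_strength=0.0)  # independent sample

# "Model": a threshold learned on dataset A.
threshold = bright_a.mean()
acc_a = np.mean((bright_a > threshold) == label_a)
acc_b = np.mean((bright_b > threshold) == label_b)

print(f"accuracy on A: {acc_a:.0%}, on independent sample B: {acc_b:.0%}")
```

Accuracy on A lands around 84%, and on B around 50% (chance). Note that a train-test split *within* dataset A would never reveal this, because both halves share the artifact; only a genuinely independent sample does.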

📌 What Systematic Reviews and Meta-Analyses Show

Systematic reviews of the literature on the relationship between facial characteristics and personality traits show that after controlling for methodological problems, effects either disappear or become so small as to have no practical significance. Effect sizes typically explain less than 5% of variation, making individual predictions meaningless even when group differences are statistically significant (S001).
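The "less than 5% of variation" point is worth making concrete. Taking r = 0.22 as an illustrative "statistically significant" face-trait correlation (a hypothetical value in the range such reviews discuss), the arithmetic shows why individual prediction remains useless:

```python
import math

r = 0.22                      # illustrative "significant" correlation
variance_explained = r ** 2   # r^2: share of variation accounted for

# Standard deviation of the prediction error relative to simply
# guessing the population mean for everyone:
residual_sd_ratio = math.sqrt(1 - r ** 2)

print(f"variance explained: {variance_explained:.1%}")        # ~4.8%
print(f"prediction error vs. baseline guess: {residual_sd_ratio:.1%}")  # ~97.6%
```

Even with a real correlation of this size, an individual prediction is barely 2% more precise than ignoring the face entirely. Group-level statistical significance and individual-level predictive power are very different things.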

The context of this problem is broader: see how modern algorithms repeat the mistakes of the 19th century and the principles of responsible AI development.

[Diagram: the network of confounders in research linking appearance and personality. Socioeconomic status, cultural stereotypes, self-fulfilling prophecy effects, and bias in data labeling all create spurious correlations that algorithms interpret as causal relationships.]

🧠The Mechanism of Delusion: Why Intelligent People Believe in "Scientific Physiognomy" — Cognitive Traps and Exploitation of Trust in Technology

Understanding the psychological mechanisms that make pseudoscientific claims about physiognomy convincing is critically important for developing effective counter-strategies. Learn more in the Statistics and Probability Theory section.

🧩 Trap 1: Representativeness Heuristic and Illusion of Validity

People tend to trust their intuitive judgments about others based on appearance because these judgments feel fast and confident. When technology supposedly confirms these intuitions, an illusion of validity emerges: "I always felt this was true, and now science has proven it."

In reality, both intuition and algorithm may reproduce the same cultural stereotypes without any real predictive power.

⚙️ Trap 2: Technological Determinism and Belief in "Machine Objectivity"

There's a widespread misconception that computers and algorithms are free from human biases because they "just process numbers." This ignores the fact that algorithms are created by humans, trained on data collected and labeled by humans, and optimized for goals defined by humans (S008).

Every stage of algorithm development — from data selection to defining success metrics — is infused with human values and biases. "Machine objectivity" is a myth that masks developer accountability.

🔁 Trap 3: Substituting Explanation with Description Through Mathematical Formalization

When correlation is expressed through equations, graphs, and statistical metrics, it acquires the appearance of explanation. "The model achieved 73% accuracy in predicting X from facial features" sounds like a scientific discovery (S002).

In reality, this is merely a description of patterns in a specific dataset without understanding causal mechanisms. Mathematical form creates an illusion of deep understanding.

  1. Correlation in data ≠ causal relationship
  2. High accuracy on training set ≠ validity on new data
  3. Statistical pattern ≠ biological mechanism

🧬 Trap 4: Biologization of Social Phenomena as Defense Against Cognitive Dissonance

Acknowledging that social inequalities result from historical and structural factors requires accepting collective responsibility and the need for systemic change. Biological explanations (including physiognomy) remove this responsibility: "It's just nature, there's nothing we can do" (S001).

This is psychologically more comfortable, especially for those who benefit from the existing order. Biologism becomes a tool for defending against cognitive dissonance, not a result of scientific analysis.

Learn more about how AI physiognomy repeats the mistakes of the 19th century, and why AI ethics requires a critical approach to such systems.

⚠️Conflicts in Sources and Zones of Uncertainty: Where Even Physiognomy Critics Disagree — and What This Means for Practice

Even among researchers criticizing physiognomic AI systems, there are disagreements on key issues. This isn't a weakness of the critique — it's a sign that the problem is more complex than it appears at first glance. More details in the section Cognitive Biases.

🔬 Disagreement 1: Do Any Valid Correlations Between Face and Personality Exist at All

One position: any correlations are artifacts of methodological problems and social confounders. The second: very weak but real connections are possible through hormonal influence on development or self-presentation effects (people with certain traits develop certain interaction styles in response to others' reactions).

This debate defines the boundaries of permissible research (S001). If correlations exist, even weak ones, the question becomes not "whether to study" but "how to study without harm."

📊 Disagreement 2: Is the Problem in the Idea Itself or in Current Implementation

First position: physiognomy is fundamentally flawed, methodological improvements won't save it. Second: current systems are poor due to data and model deficiencies, but theoretically more sophisticated approaches are possible.

This disagreement directly affects regulatory strategy: complete ban or strict methodological standards (S004). The choice between them is not technical but political.

🧾 Disagreement 3: The Role of Academic Platforms in Spreading Problematic Research

One side: platforms like ResearchGate should implement strict moderation and remove pseudoscientific content. The other fears this would create censorship mechanisms that could be used against legitimate but controversial research.

The balance between openness and quality remains an unsolved problem (S003). There's no universal algorithm that can distinguish "inconvenient truth" from "convenient lie."

⚙️ Uncertainty: How to Assess Risks with Rapidly Developing Technologies

Current physiognomic systems are ineffective. But it's unclear whether this will remain true with the emergence of fundamentally new approaches — integration of genetic data, longitudinal studies, neuroimaging.

The precautionary principle suggests restrictions even under uncertainty. But this conflicts with the principle of research freedom (S008).

What This Means for Practice
A physiognomy critic cannot simply say "this is false." They need to specify which mechanisms are flawed, under what conditions they might be valid, and why current evidence is insufficient. This requires more work, but it's the only way to be convincing.
Why Disagreements Don't Weaken Critics' Position
The presence of debates within the critical community shows it's not ideological. Ideologues don't argue — they declare. Researchers argue because they seek truth, not victory.

For journalists, regulators, and researchers, this means: demand not agreement, but transparency. What specific assumptions does the author make? Where might they be wrong? What data could refute them?

🛡️Verification Protocol: Seven Questions That Will Expose Pseudoscientific "AI Physiognomy" Work in Three Minutes — A Checklist for Researchers, Journalists, and Regulators

A practical tool for rapid assessment of scientific validity in works claiming AI can determine personality characteristics from appearance. Learn more in the Epistemology Basics section.

✅ Question 1: Has the work undergone independent peer review in a journal with an impact factor above 3.0?

Preprints on ResearchGate or arXiv do not undergo rigorous review. Publication in a peer-reviewed journal doesn't guarantee quality, but its absence is a red flag.

Check the journal in Scopus or Web of Science databases. If the work exists only as a preprint for more than a year, that's suspicious (S003).

✅ Question 2: Were the hypothesis and analysis plan registered before data collection?

Preregistration on platforms like Open Science Framework protects against p-hacking and HARKing (hypothesizing after results are known).

If authors cannot provide a link to preregistration, results may be the product of data fitting (S002).

✅ Question 3: Has the model been tested on an independent sample from a different cultural context?

Validation only on a portion of the same dataset (train-test split) is insufficient. Testing on completely independent data collected in another country, at another time, by other researchers is required.

Without such validation, results may be specific to the particular dataset (S001).

⛔ Question 4: Are obvious confounders (age, gender, race, SES) controlled for?

If a model "predicts" a characteristic but authors haven't shown that the prediction holds after controlling for sociodemographic variables, the result may be entirely explained by these confounders.

Demand tables with multivariate analysis results (S001).

⛔ Question 5: Are data and code disclosed for independent verification?

Reproducibility is the foundation of science. If authors don't provide access to data (or at least synthetic data with the same properties) and analysis code, results cannot be verified.

References to "confidentiality" or "trade secrets" in academic work are unacceptable (S002).

⛔ Question 6: Are ethical risks and potential for misuse discussed?

Legitimate work in a sensitive area must include detailed discussion of potential risks: how results could be used for discrimination, profiling, privacy violations.

Absence of such discussion indicates that authors either don't recognize the consequences or are deliberately concealing them. Familiarize yourself with principles of responsible AI development.

⛔ Question 7: Does it use language characteristic of pseudoscience?

Red flags in text: absolute claims without qualifications ("AI accurately identifies criminals by face"), appeals to authority without evidence, absence of limitations discussion, accusations that critics are "biased" or practicing "political correctness".

Scientific language
"The model showed a correlation of r = 0.32 (p < 0.05) when controlling for age and gender, but the effect may be explained by cultural differences in emotional expression".
Pseudoscientific language
"Our AI revealed a hidden connection between facial features and personality that scientists have ignored out of political correctness".

If a work contains 4+ red flags from this list, it doesn't deserve trust. If it contains 2–3, critical reading and consultation with a methodologist are required. If red flags are absent, the work may be legitimate — but this doesn't guarantee its correctness.
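The seven questions and the 4+/2-3 thresholds above can be turned into a small triage helper. This is an illustrative sketch, not an established tool; the function and item wordings are hypothetical condensations of the article's protocol.

```python
# Checklist items condensed from the seven questions above (illustrative).
CHECKLIST = [
    "peer-reviewed journal (indexed, with impact factor)",
    "preregistered hypothesis and analysis plan",
    "validated on an independent, cross-cultural sample",
    "confounders controlled (age, gender, race, SES)",
    "data and analysis code disclosed",
    "ethical risks and misuse potential discussed",
    "no pseudoscientific rhetoric (absolute claims, attacks on critics)",
]

def triage(answers: list[bool]) -> str:
    """answers[i] is True if the paper passes checklist item i."""
    red_flags = answers.count(False)
    if red_flags >= 4:
        return "reject: not trustworthy"
    if red_flags >= 2:
        return "caution: critical reading and methodologist review required"
    return "possibly legitimate (still no guarantee of correctness)"

# Example: a preprint with no peer review, no preregistration,
# no independent validation, and no disclosed data (4 red flags).
print(triage([False, False, False, True, False, True, True]))
# -> reject: not trustworthy
```

The point of encoding the protocol this way is that the verdict depends only on counted failures, not on how impressive the paper's graphs look.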

Remember: absence of evidence of harm is not evidence of absence of harm. The burden of proof lies with authors claiming AI can determine personality characteristics from appearance. Review the analysis of physiognomic AI and threats to civil liberties.
⚔️ Counter-Position Analysis: Critical Counterpoint

The article raises important questions, but relies on assumptions worth examining. Here's where the logic may falter — and why this matters for honest analysis.

Overestimating the Scale of the Threat

Most facial recognition systems function as identification tools (comparison against a database), not as personality assessors. Explicit cases of physiognomy — systems evaluating "criminality" by face — remain exceptions, not the norm. The problem may be localized to specific jurisdictions (e.g., China), rather than representing a global epidemic.

Lack of Quantitative Data on ResearchGate

The platform criticism is not supported by systematic analysis: what percentage of "AI physiognomy" publications are actually pseudoscientific? Perhaps we're dealing with isolated cases inflated by media. Without concrete numbers, the conclusions remain speculative.

Conflating Legitimate Research with Pseudoscience

Scientifically grounded work on emotion recognition (Ekman, Facial Action Coding System) is not physiognomy. The article may create the impression that any facial analysis is pseudoscience, which would hinder the development of legitimate directions, such as helping people with autism recognize emotions.

Lack of Discussion of Technical Solutions

The article focuses on criticism but says little about how developers can technically prevent physiognomy: fairness constraints, adversarial debiasing, explainability tools. This creates a sense of hopelessness, though the tools already exist.

Risk of Censorship Through Moderation

Calls to "report pseudoscientific articles" can be used to suppress inconvenient but legitimate research. The boundary between a controversial hypothesis and pseudoscience is not always obvious, and excessive vigilance leads to self-censorship in academia. A nuanced approach to moderation on open platforms is needed.

❓ Frequently Asked Questions

Physiognomy is a discredited practice of determining a person's character and abilities based on facial features, lacking any scientific evidence. Historically, it was used to justify racism and discrimination (for example, in Nazi Germany and colonial empires). Modern psychology and neuroscience have found no reliable connections between nose shape, eye structure, or other physical traits and personality characteristics. Any correlations found in data are explained by social stereotypes that algorithms reproduce, not by biological patterns.
Because machine learning finds patterns in data but doesn't understand causal relationships. If a training dataset contains social biases (for example, "people with certain appearances are more often denied credit"), the algorithm will capture the correlation and reproduce it. This creates an illusion of physiognomy's "objectivity": "the AI found the connection itself, so it must be real." In reality, the algorithm merely amplifies existing discrimination. The problem is compounded by developers often failing to verify the validity of initial hypotheses and not accounting for ethical risks.
Not automatically. ResearchGate is a platform for sharing scientific materials, but not all publications there have undergone peer review. The platform contains both quality articles from peer-reviewed journals and preprints, student papers, and even pseudoscientific texts. The key quality marker is the presence of a DOI (Digital Object Identifier) and publication in a journal with an impact factor. If an article is uploaded by the author as a "preprint" or "working version," its conclusions require additional verification. Always check the source, methodology, and presence of independent peer review.
arXiv is an open archive of preprints—scientific articles that haven't yet undergone peer review. Its purpose is to accelerate the exchange of ideas, especially in physics, mathematics, and computer science. The difference from journals: arXiv has no mandatory peer review; moderators only check formal topic compliance and absence of obvious spam. This means an arXiv article may contain errors, unverified hypotheses, or even pseudoscience. Preprints are often later published in journals after revision, but until then their conclusions are considered preliminary. Evidence level: ≤2 on the evidenceGrade scale.
Request documentation from developers: what features does the model use, what data was it trained on, was a bias audit conducted? If the system analyzes faces to assess personality, criminal tendencies, or trustworthiness—that's a red flag. Check whether the company has publications in peer-reviewed journals or only marketing claims. Use this checklist: (1) What specific facial features correlate with conclusions? (2) How do authors explain the causal relationship? (3) Was testing conducted across different ethnic groups? (4) Is there independent expert review? If there are no answers—the system likely reproduces physiognomy.
Because correlation doesn't equal causation. If an algorithm finds a connection between face shape and, for example, income level, this could be explained by dozens of factors: social background, access to education, employer stereotypes, even photo quality in the dataset. Physiognomy claims that facial features biologically determine character—this requires controlled experiments excluding all external variables. Such experiments don't exist. Moreover, research shows that people project their cultural expectations onto faces (for example, "square jaw = leadership"), and AI learns from these projections, not from reality.
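A toy simulation (synthetic data, invented parameters) shows how a shared confounder produces a clear correlation between two variables with no causal link between them, which is precisely the trap an algorithm falls into:

```python
import random

random.seed(42)

# Hypothetical confounder: "social background" influences both photo quality
# (standing in for appearance features in a dataset) and income. The two end
# up correlated even though neither causes the other.
n = 10_000
background = [random.random() for _ in range(n)]
photo_quality = [b + random.gauss(0, 0.3) for b in background]
income = [b + random.gauss(0, 0.3) for b in background]

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(photo_quality, income)
print(round(r, 2))  # a clear positive correlation, despite zero causal link
```

An algorithm given only `photo_quality` and `income` would confidently "predict" income from appearance; only a controlled design that holds the confounder fixed could reveal that the link is spurious.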
Discrimination on a massive scale. Facial recognition systems with physiognomy elements are used for hiring, credit scoring, assessing crime suspects, and border control. If an algorithm believes certain facial features indicate "unreliability" or "criminal tendencies," this leads to systematic denial of rights to entire groups of people. Historical example: China's emotion recognition system for Uyghurs, which was used for repression. Risks are amplified by algorithm opacity: discrimination victims often don't know why they were denied and cannot challenge the decision.
Yes, but they address different tasks. Legitimate directions: identity verification (comparing a face to a database), emotion recognition through facial expressions (with caveats about cultural differences), medical diagnosis of genetic syndromes through facial dysmorphisms. Key difference: these systems don't claim that nose shape determines honesty or intelligence. They work with dynamic features (facial expression) or known medical markers. The problem is the boundary is blurred: companies may call physiognomy "microexpression analysis" or "behavioral biometrics" to avoid criticism.
Seven-point checklist: (1) Is there a DOI and journal name? (2) What's the journal's impact factor (check on Scimago)? (3) Are data collection methods and sample size specified? (4) Is there a control group and statistical analysis? (5) Do authors acknowledge study limitations? (6) Do other scientists cite the work (check Google Scholar)? (7) Do conclusions match the data or do authors make extrapolations? If the answer is "no" to 3+ points—the article is questionable. Additional red flag: sensational claims in the title ("AI learned to read minds from faces").
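The seven-point checklist can be sketched as a scoring function. The field names below are hypothetical shorthand for the seven questions; fill them in manually after inspecting the paper, since none of this can be automated reliably.

```python
# One boolean per checklist item (hypothetical keys, hand-filled by the reader).
CHECKS = [
    "has_doi_and_journal",        # (1) DOI and journal name present
    "journal_has_impact_factor",  # (2) journal indexed, e.g. on Scimago
    "methods_and_sample_given",   # (3) data collection and sample size specified
    "control_group_and_stats",    # (4) control group and statistical analysis
    "limitations_acknowledged",   # (5) authors discuss limitations
    "independently_cited",        # (6) cited by other scientists
    "conclusions_match_data",     # (7) no unsupported extrapolations
]

def credibility(article):
    """Count failed checks; per the checklist, 3+ failures means 'questionable'."""
    failed = sum(1 for check in CHECKS if not article.get(check, False))
    return failed, failed >= 3

# Example: a typical unreviewed preprint (invented values).
preprint = {"has_doi_and_journal": False, "journal_has_impact_factor": False,
            "methods_and_sample_given": True, "control_group_and_stats": False,
            "limitations_acknowledged": True, "independently_cited": False,
            "conclusions_match_data": True}
failed, questionable = credibility(preprint)
print(failed, questionable)
```

A missing key counts as a failure, which matches the spirit of the checklist: if a paper doesn't state something, a careful reader should assume the worst.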
Report the problem. ResearchGate has a "Report" function, arXiv has moderator contact. Describe specific violations: lack of methodology, false claims, ethical risks. Share information in the scientific community (Twitter, specialized forums)—collective criticism is more effective. If the article is used to justify discriminatory practices (for example, in HR systems), you can contact regulators (in the EU—data protection authorities, in the US—relevant agencies). The main thing—don't stay silent: pseudoscience thrives in silence.
Several reasons: (1) Ignorance — some computer science researchers are unfamiliar with the history of physiognomy and ethical debates. (2) Commercial interest — companies commission such research to legitimize their products. (3) Publication pressure — academia values quantity of papers, and arXiv preprints are seen as a quick way to gain visibility. (4) Methodological blindness — focus on algorithm accuracy without asking "should we even be predicting this?" (5) Lack of interdisciplinarity — without psychologists, sociologists, and ethicists on the team, risks go unnoticed.
Key players: AI Now Institute (USA), Algorithm Watch (Germany), Access Now, Electronic Frontier Foundation (EFF). They publish reports, lobby for regulations, and audit systems. In academia — researchers like Kate Crawford, Timnit Gebru, Joy Buolamwini (Algorithmic Justice League). The EU has adopted the AI Act, banning social scoring systems and certain types of biometrics. In the US, regulation is weaker, but there are civil society initiatives. If you've experienced discrimination from an AI system, these organizations can help.
Deymond Laplasa
Cognitive Security Researcher

Author of the Cognitive Immunology Hub project. Researches mechanisms of disinformation, pseudoscience, and cognitive biases. All materials are based on peer-reviewed sources.

