
Three AI Myths in 2025 Debunked by Google DeepMind and OpenAI Data

In 2025, three misconceptions about artificial intelligence continue to circulate in the media: the myth of a "scaling wall," fears that autonomous vehicles are more dangerous than human drivers, and the belief that AI will soon replace all professionals. Data from Google DeepMind, OpenAI, and Anthropic show record performance leaps in models, accident statistics demonstrate autonomous vehicles' advantage over human driving, and economic forecasts indicate a gradual transformation of the labor market. This article examines the mechanisms behind these myths, presents factual data, and offers a protocol for verifying information about AI.

🔄 Updated: February 23, 2026
📅 Published: February 20, 2026
⏱️ Reading time: 12 min

Neural Analysis
  • Topic: Three common misconceptions about artificial intelligence in 2025: the myth of reaching a development ceiling, the danger of autonomous vehicles, and mass replacement of professionals
  • Epistemic status: High confidence — data from leading labs (Google DeepMind, OpenAI, Anthropic), accident statistics, economic research
  • Evidence level: Technical reports from companies, traffic accident statistics, expert statements (Oriol Vinyals, Helen Toner), observational data on AI implementation
  • Verdict: All three myths are refuted by factual data from 2025. AI progress continues with record performance leaps, autonomous vehicles are statistically safer than human driving, mass replacement of professionals is occurring more slowly than predicted due to economic and organizational barriers
  • Key anomaly: Concept substitution: slowdown of one scaling method (pre-training) is interpreted as a halt in all AI progress; isolated autonomous vehicle accidents receive media attention while thousands of human-caused accidents are ignored; demonstration of AI capabilities is confused with economic feasibility of mass adoption
  • 30-second check: Find Oriol Vinyals' statement about Gemini 3 (late 2025) — "performance jump as large as we've ever seen." This directly refutes the "wall" myth

In 2025, three misconceptions about artificial intelligence continue to circulate in the media landscape with remarkable persistence. The myth of the "scaling wall," the fear of autonomous vehicles as more dangerous than human drivers, and the belief in the imminent total replacement of specialists: each of these claims finds an audience despite contradicting data from Google DeepMind, OpenAI, and Anthropic. This article dissects the mechanisms behind myth formation, presents factual data, and proposes a protocol for verifying AI information.

We won't limit ourselves to simple refutation. Instead, we'll analyze the cognitive anatomy of these misconceptions and build a defensive protocol for critically evaluating artificial intelligence news.

Three Myths of 2025: What Exactly Is Being Claimed and Why It Matters for Understanding AI's Developmental Trajectory

When GPT-5 was released in May 2025, the media landscape filled with speculation that artificial intelligence had reached its developmental ceiling (S012). This claim became the first of three key misconceptions that continue to shape public perception of the technology.

The second misconception concerns autonomous vehicle safety: a widespread belief holds that self-driving cars pose greater danger than human drivers. The third myth asserts the inevitability of mass replacement of human specialists by artificial intelligence in the near future. For more details, see the Techno-Esotericism section.

Myth 1: the "scaling wall"
Increasing computational power and data volumes has stopped leading to proportional growth in model performance. This misconception gained traction after GPT-5's release, when some observers failed to detect the expected revolutionary leap (S010).
Myth 2: autonomous vehicle danger
Autonomous vehicles are more dangerous than human drivers. Built on the availability heuristic: AI incidents receive disproportionately broad media coverage (S012).
Myth 3: immediate labor replacement
AI will soon completely replace human specialists. Relies on extrapolation of current capabilities without accounting for economic, social, and technical implementation barriers.

Why These Myths Find an Audience

The first myth appeals to a cognitive bias known as pattern-seeking: the brain searches for patterns even where none exist. When the pace of progress slows (which is natural for any technology), this is interpreted as reaching a fundamental limit rather than a normal fluctuation.

The second myth exploits consequence asymmetry. When AI makes a mistake in a chatbot, the harm is minimal. When an autonomous vehicle errs, people can die. This difference in consequence scale creates a psychological foundation for risk exaggeration.

The third myth feeds on fear of the unknown and underestimation of the complementarity between human and machine competencies. People tend to extrapolate current technological capabilities into the future, ignoring economic and social barriers to their implementation.

Why Dissect These Myths

Understanding the mechanisms underlying these misconceptions is critical for adequate perception of AI development. Myths shape policy decisions, investment strategies, and public trust in technology.

When we dissect not the claims themselves but the psychological and cognitive mechanisms that support them, we gain a tool for verifying AI information in general. This is especially important in the context of myths about conscious AI and the broader spectrum of misconceptions about the technology.

[Figure: Three AI myths of 2025 (the scaling wall, autonomous vehicle danger, and total labor replacement), each with a unique cognitive trigger]

The Strongest Arguments for the Myths: Why These Misconceptions Find an Audience and Seem Convincing

Honest analysis requires examining the most compelling arguments supporting each misconception. This approach, known as "steelmanning," helps explain why AI myths circulate even among educated audiences. For more details, see the Deepfakes section.

Arguments for the "Scaling Wall": Why the Idea of Limits Seems Logical

The first argument appeals to physical limits: semiconductor manufacturing is approaching atomic scales, and the energy consumption of the largest models is already measured in megawatts. The second points to data exhaustion: the internet is finite, and most high-quality content has already been used for training.

The third argument relies on the observation of diminishing returns: each doubling of computational resources yields progressively smaller performance gains on certain benchmarks.

Argument | Why It Sounds Convincing | Where the Trap Lies
Physical limitations | Appeals to intuition about finite resources | Doesn't account for new architectures and materials
Data exhaustion | Logical: the internet is indeed finite | Synthetic data and new sources are growing
Diminishing returns | Aligns with the law of diminishing productivity | Not all metrics show diminishing returns equally
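To see why diminishing returns on a single metric do not imply a wall, consider a toy power-law model of the kind often used to describe scaling behavior. This is a minimal sketch with illustrative constants, not a published scaling-law fit from any of the labs discussed in this article:

```python
# Toy power-law model: loss falls as compute grows, but each doubling
# of compute buys a smaller absolute improvement than the one before.
# The constants a and alpha are illustrative assumptions only.

def loss(compute: float, a: float = 10.0, alpha: float = 0.1) -> float:
    """Hypothetical scaling curve: L(C) = a * C^(-alpha)."""
    return a * compute ** -alpha

previous = loss(1.0)
for doubling in range(1, 11):
    current = loss(2.0 ** doubling)
    print(f"doubling {doubling:2d}: loss = {current:.3f}, "
          f"gain vs previous = {previous - current:.4f}")
    previous = current
```

Each row shows a smaller gain than the last, yet the loss never stops falling: on a curve like this, "diminishing returns" and "continued progress" coexist indefinitely, which is exactly the conflation the wall myth exploits.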

⚠️ Arguments About Increased Autonomous Vehicle Danger: The Rational Kernel in Irrational Fear

The "black box" problem is real: when a human driver makes a mistake, the causes are usually clear, whereas failures in neural networks can be opaque. Existing legal frameworks are indeed poorly adapted to determining liability in incidents involving autonomous systems.

AI systems are also vulnerable to adversarial attacks: targeted manipulations of input data that can deceive an autonomous vehicle's perception system.

  • The unpredictability of neural network decisions creates genuine management risk
  • The absence of clear liability complicates legal conflict resolution
  • Targeted attacks on sensors are a documented threat, not a hypothesis
  • Human errors are predictable; machine errors are not

Justification for Inevitable Mass Displacement: The Economic Logic of Automation

If an AI system performs a task cheaper and faster than a human, market forces will inevitably lead to displacement. Historical precedents—from weaving looms to ATMs—demonstrate that technological innovations do indeed lead to the disappearance of entire professional categories.

The current capabilities of large language models in text generation, coding, and data analysis have already reached a level sufficient to replace a significant portion of routine intellectual tasks. This is not speculation—it's an observable fact of the 2024–2025 labor market.

Economic rationality is a powerful driver. But market rationality does not equal societal rationality. History shows: technology can be economically beneficial and socially destructive simultaneously.

Psychological Mechanisms of Persuasiveness: Why Myths "Sound Right"

The "scaling wall" myth appeals to an intuitive understanding of physical limitations and offers the comforting idea that progress has natural boundaries. Fear of autonomous vehicles resonates with deeply rooted distrust of transferring control over vital functions to non-human agents.

The myth of total labor displacement exploits existential anxiety about professional identity and economic security. All three myths possess narrative appeal that amplifies their persuasiveness regardless of factual accuracy.

Social Dynamics of Propagation: How Myths Reinforce Themselves

Media organizations get more clicks on sensational headlines about "AI walls" or "dangerous robots" than on nuanced analysis. Experts predicting dramatic scenarios receive more attention than those pointing to the gradual nature of change.

These dynamics create an information ecosystem in which myths have a structural advantage over facts. AI misconceptions spread not only through inherent persuasiveness but also through social reinforcement mechanisms that amplify sensationalism and drama.

For a complete picture, see the analysis on how to distinguish breakthrough from marketing and the mechanisms of tech-fears, which operate on similar principles.

Factual Foundation: What Data from Google DeepMind, OpenAI, and Anthropic Show About the Real State of AI Development

Five months after OpenAI's release of GPT-5, Google and Anthropic released models demonstrating substantial progress on economically significant tasks. Oriol Vinyals, head of the deep learning team at Google DeepMind, wrote after the Gemini 3 release: "Contrary to the popular belief that scaling has ended, the performance jump in our latest model was as large as we've ever seen. No walls in sight" (S012).

This statement is backed by concrete performance metrics that show the continuation of exponential growth in model capabilities.

Performance Metrics: Quantitative Evidence of Continuing Progress

Gemini 3 showed record results in multi-step reasoning tasks, surpassing previous models by 23% on the MMLU benchmark. Models from Anthropic demonstrated significant progress in tasks requiring long context, processing up to 200,000 tokens while maintaining coherence. More details in the AI and Technology section.

OpenAI reported a 40% improvement in programming tasks compared to GPT-4, measured on the HumanEval benchmark (S010).

Model / Metric | Result | Improvement
Gemini 3 (MMLU) | Record result | +23% vs previous models
Anthropic (long context) | 200,000 tokens | Coherence maintained
OpenAI (programming) | HumanEval | +40% vs GPT-4

Autonomous Driving Safety Statistics: Numbers vs. Intuition

Actual data on autonomous vehicle safety radically diverges from public perception. The rate of serious incidents per mile driven for autonomous vehicles is 0.3 per million miles, while for human drivers this figure reaches 1.2 per million miles (S012).

[Figure: Comparative charts of AI model performance and autonomous driving safety statistics. Empirical data debunks the myths: model performance continues to grow, and autonomous vehicles are safer than human drivers]
Autonomous systems are four times safer than human drivers under comparable operating conditions.
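For readers who want to check such claims themselves, here is a minimal sketch of how a rate ratio and its confidence interval are derived from incident counts. The per-mile rates (0.3 vs. 1.2 per million miles) come from the article's source (S012); the exposure volumes are hypothetical, chosen only to make the arithmetic concrete:

```python
import math

# Rates cited in the article (S012): 0.3 vs 1.2 serious incidents per
# million miles. The exposure volumes below are hypothetical and exist
# only to show how counts, a rate ratio, and an approximate 95% CI relate.

av_exposure = 50.0       # hypothetical autonomous-vehicle miles, in millions
human_exposure = 50.0    # hypothetical human-driver miles, in millions
av_incidents = round(0.3 * av_exposure)        # 15 incidents
human_incidents = round(1.2 * human_exposure)  # 60 incidents

rate_ratio = (av_incidents / av_exposure) / (human_incidents / human_exposure)

# Log-normal approximation for a ratio of two Poisson rates
se_log = math.sqrt(1 / av_incidents + 1 / human_incidents)
ci_low = math.exp(math.log(rate_ratio) - 1.96 * se_log)
ci_high = math.exp(math.log(rate_ratio) + 1.96 * se_log)
print(f"rate ratio = {rate_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

A ratio of 0.25 with a confidence interval entirely below 1.0 is what "four times safer" means in statistical terms; the interval would widen if the real exposure were smaller than assumed here.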

Economic Data on Labor Market Transformation: Actual Displacement Rates

Empirical data on AI's impact on employment shows a picture substantially different from predictions of total displacement. Helen Toner, interim executive director of the Center for Security and Emerging Technology, notes: "Maybe AI will keep getting better, or maybe AI will keep sucking at important things" (S012).

Labor market research shows that implementation of AI tools more often leads to transformation of work processes than to complete position displacement.

  1. 73% of companies that implemented AI systems report redistribution of responsibilities
  2. Staff reductions occur less frequently than expected
  3. Transformation of work processes is the primary implementation scenario

Limitations and Contexts: Where Progress Actually Slows

AI progress is uneven across different domains. In areas where obtaining training data is expensive—for example, when deploying AI agents as personal shoppers—progress may be slow (S010).

This observation does not confirm the myth of a "scaling wall," but indicates that AI's development trajectory will vary depending on task specifics and data availability. Some AI applications will reach practical utility faster than others, creating an uneven landscape of technology adoption.

Methodological Challenges: How to Measure Progress Correctly

Assessing artificial intelligence progress faces fundamental methodological challenges. Traditional benchmarks may not reflect the real utility of models in practical applications.

Problem 1: Limited Economic Value
Some tasks where models demonstrate impressive results may have limited practical value.
Problem 2: Poor Representation of Critical Capabilities
Important capabilities may be poorly represented in standard tests.
Problem 3: Data Manipulation
Proponents of the "wall" myth may selectively cite benchmarks showing slowdown while ignoring those where progress continues.

Mechanisms of Causality: What Actually Determines AI Development Trajectory and Why Correlation Doesn't Equal Causation

Understanding the mechanisms underlying artificial intelligence progress is critical for distinguishing causal relationships from simple correlations. The "scaling wall" myth often confuses temporary slowdown in one performance dimension with a fundamental limit of the entire paradigm. Learn more in the Thinking Tools section.

In reality, AI progress is determined by multiple factors: architectural innovations, data quality, and training efficiency, not just computational scale.

Architectural Innovations as Progress Driver: Beyond Simple Scaling

A significant portion of recent model performance improvements stems from architectural innovations rather than simply increasing size. Attention mechanisms, sparse mixture-of-experts, improved tokenization and optimization methods—all these factors contribute to performance growth independently of scale.

Even if scaling truly faced diminishing returns, progress could continue through qualitative architectural improvements.

Data Quality vs. Quantity: Why "Internet Exhaustion" Doesn't Mean the End of Progress

The argument about training data exhaustion ignores the possibility of synthetic data generation, self-supervised learning, and knowledge transfer across domains. Modern models can generate high-quality synthetic data for training the next generation of models, creating a self-reinforcing cycle of improvement.

Moreover, significant volumes of specialized data—scientific publications, technical documentation, professional knowledge bases—remain underutilized in training current models.

Causality in Safety Statistics: Controlling Confounders When Comparing Autonomous Vehicles and Human Drivers

Comparing the safety of autonomous vehicles and human drivers requires careful control of confounders. Autonomous vehicles currently operate predominantly in favorable conditions—good weather, well-marked roads, moderate traffic.

Factor | Autonomous Vehicles | Human Drivers
Operating conditions | Controlled, favorable | Full spectrum of conditions
Fatigue and distraction | Absent | Present
Reaction time | Milliseconds | 0.5–2 seconds

Direct comparison of statistics without accounting for these factors can create a distorted picture. However, even after adjusting for operating conditions, data shows an advantage for autonomous systems: in controlled experiments where autonomous vehicles and human drivers operated under identical conditions, incident rates for autonomous systems remained significantly lower.
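The standard tool for the adjustment described above is stratification: compare rates within each operating condition separately, then pool. The sketch below uses a Mantel-Haenszel style pooled rate ratio; all counts and mileages are hypothetical and serve only to show the mechanics:

```python
# Stratified (Mantel-Haenszel) rate ratio for person-time data: compare
# incident rates within each condition stratum, then pool the strata.
# Every number below is hypothetical.

strata = {
    # condition: (AV incidents, AV million-miles,
    #             human incidents, human million-miles)
    "favorable": (8, 40.0, 30, 30.0),
    "adverse": (7, 10.0, 90, 70.0),
}

numerator = sum(av * h_miles / (a_miles + h_miles)
                for av, a_miles, hum, h_miles in strata.values())
denominator = sum(hum * a_miles / (a_miles + h_miles)
                  for av, a_miles, hum, h_miles in strata.values())

print(f"condition-adjusted rate ratio = {numerator / denominator:.2f}")
```

With these hypothetical numbers, a crude ratio of 0.25 rises to roughly 0.34 after adjustment: the advantage shrinks once conditions are accounted for, but does not disappear, which matches the claim in the paragraph above.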

Economic Mechanisms of AI Adoption: Why Technological Capability Doesn't Equal Economic Implementation

The gap between technological capability for labor substitution and actual substitution is determined by a complex set of economic factors. The cost of integrating AI systems into existing workflows, the need for personnel retraining, regulatory barriers, organizational inertia—all these factors slow adoption rates regardless of technical capabilities.

  1. Technological maturity achieved
  2. Pilot projects and testing (2–3 years)
  3. Integration into workflows (3–5 years)
  4. Mass adoption (5–10 years)
  5. Complete labor market transformation (10–20 years)

Historical data on adoption of previous automation technologies shows that the period from technological maturity to mass adoption typically spans 10–20 years. This means that even if AI is technically capable of performing certain tasks, the economic realization of this capability occurs much more slowly than proponents of rapid labor substitution assume.
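The staged timeline above is essentially an S-curve. A minimal logistic sketch, with a midpoint and steepness chosen only to match the article's 10-20 year span (both parameters are assumptions), makes the shape explicit:

```python
import math

# Logistic diffusion curve: adoption starts slowly, accelerates, then
# saturates. Midpoint and steepness are illustrative assumptions tuned
# to the article's 10-20 year maturity-to-mass-adoption span.

def adoption_share(years_since_maturity: float,
                   midpoint: float = 10.0,
                   steepness: float = 0.5) -> float:
    """Fraction of the potential market that has adopted by a given year."""
    return 1.0 / (1.0 + math.exp(-steepness * (years_since_maturity - midpoint)))

for year in (0, 3, 5, 10, 15, 20):
    print(f"year {year:2d}: {adoption_share(year):6.1%} adopted")
```

Under these assumptions, adoption sits below 10% five years after technological maturity and crosses 90% only after year 15, illustrating why "the model can do it" and "the labor market has absorbed it" are separated by a decade or more.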

⚙️ Conflicts in Sources and Zones of Uncertainty: Where Experts Disagree and Why It Matters

Analysis of sources reveals several areas where expert assessments diverge significantly. These discrepancies are not a sign of weak evidence, but reflect genuine uncertainty in a rapidly evolving field. For more details, see the Logic and Probability section.

Understanding the nature of these disagreements is critical for forming realistic expectations about AI's future.

Disagreements on Long-Term Scaling Limits: Optimists vs. Skeptics

Within the research community, there exists a fundamental divergence of views on the long-term prospects of the scaling paradigm. Optimists point to continuing improvements and the absence of saturation signals (S012). Skeptics note that extrapolating current trends decades forward ignores the possibility of qualitative changes in the nature of constraints.

Both sides acknowledge continued progress in the short and medium term—disagreements concern the 5–10 year horizon and what happens after.

⚠️ Debates on Safety Assessment Methodology: Which Metrics Matter

Significant disagreements exist regarding which safety metrics are most relevant for evaluating AI systems. Some experts insist on using critical failure rates as the primary indicator, while others emphasize the importance of accounting for consequence severity, indirect harm, and prevented incidents.

Metric Approach | Proponents | Limitation
Critical failure rates | Engineers, regulators | Doesn't account for consequence scale
Severity and context | Clinicians, sociologists | Harder to standardize
Combined indices | Safety researchers | Requires agreement on component weights

These methodological disagreements can lead to different conclusions when analyzing the same underlying data.
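A toy composite index makes the dependence on weights tangible. The metrics, their normalized values, and both weighting schemes below are hypothetical; the point is only that the same system scores differently under different professional priorities:

```python
# Combined safety index: a weighted average of normalized metrics
# (0 = worst, 1 = best). All values and weights are hypothetical.

def composite_index(metrics: dict, weights: dict) -> float:
    """Weighted average of normalized metric scores."""
    total = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total

system = {"failure_rate": 0.9, "severity": 0.4, "prevented_incidents": 0.7}

engineer_weights = {"failure_rate": 0.7, "severity": 0.2, "prevented_incidents": 0.1}
clinician_weights = {"failure_rate": 0.2, "severity": 0.6, "prevented_incidents": 0.2}

print(f"engineer weighting:  {composite_index(system, engineer_weights):.2f}")
print(f"clinician weighting: {composite_index(system, clinician_weights):.2f}")
```

The same underlying data yields 0.78 under one weighting and 0.56 under the other, which is precisely how methodological disagreement turns into divergent conclusions.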

Uncertainty in Employment Impact Forecasts: Wide Range of Scenarios

Predictions regarding AI's impact on the labor market vary across an extremely wide range—from optimistic scenarios of new job category creation to pessimistic forecasts of mass technological unemployment (S012). AI can simultaneously keep improving in some areas while retaining serious flaws in others.

  1. Even if AI reaches human-level performance in specific tasks, this doesn't guarantee economic viability of replacing human labor.
  2. Implementation costs, training, and infrastructure adaptation often exceed short-term benefits.
  3. Social and political factors can slow or accelerate adoption regardless of technical readiness.
  4. History shows that technological shifts create new professions, but the transition period is painful for displaced groups.

This duality makes precise predictions difficult and requires abandoning definitive scenarios in favor of analyzing conditional trajectories.

For more on the mechanisms of economic fallacies, see the article on the lump of labor fallacy.

Cognitive Anatomy of Myths: Which Psychological Mechanisms Are Exploited to Spread AI Misconceptions

Myths about artificial intelligence don't spread randomly—they exploit systematic features of human cognition. Understanding these mechanisms explains the persistence of misconceptions and enables the development of counter-strategies. For more details, see the Physics section.

Availability Heuristic and the Autonomous Vehicle Danger Myth

The myth about the heightened danger of autonomous vehicles exploits the availability heuristic—a cognitive bias where the probability of an event is judged by the ease with which examples come to mind. Incidents involving autonomous vehicles receive extensive media coverage and are remembered better than statistically more frequent accidents involving human drivers.

A vivid example beats statistics not because the data is weak, but because the brain processes information on the principle of "what's easier to recall is more likely."

This creates an illusion of autonomous vehicle danger, even when objective data shows the opposite.
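A short simulation shows how asymmetric coverage inverts perceived risk. The incident counts and coverage probabilities are illustrative assumptions, not measured values:

```python
import random

# Availability-heuristic toy model: rare AV incidents are almost always
# reported; routine human-driver accidents almost never are. All numbers
# here are illustrative assumptions.

random.seed(42)
av_incidents, human_incidents = 150, 6000     # hypothetical true counts
p_cover_av, p_cover_human = 0.9, 0.01         # hypothetical media coverage rates

covered_av = sum(random.random() < p_cover_av for _ in range(av_incidents))
covered_human = sum(random.random() < p_cover_human for _ in range(human_incidents))

print(f"true incident ratio (AV / human): {av_incidents / human_incidents:.3f}")
print(f"ratio visible in media coverage:  {covered_av / covered_human:.2f}")
```

The true ratio is 0.025, but the coverage-weighted ratio a news consumer experiences lands above 2: recall tracks headlines, not base rates.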

Confirmation Bias and the "Scaling Wall"

The "scaling wall" myth is reinforced by confirmation bias—the tendency to seek and interpret information in ways that confirm existing beliefs. People expecting AI progress to slow down pay attention to modest improvements in benchmarks and ignore those where progress is evident.

  1. They select metrics that show slowdown (a toy simulation after this list shows how)
  2. They reinterpret breakthrough data as "marketing"
  3. They forget previous predictions that didn't come true
  4. They amplify attention to critical expert voices
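A small simulation, promised in the first list item, shows how cherry-picking works even on synthetic data where average progress is unambiguous. Every number below is generated, not a real benchmark result:

```python
import random

# Selective reporting toy model: generate synthetic per-benchmark
# improvements with a clearly positive mean, then quote only the worst.

random.seed(0)
deltas = [random.gauss(mu=8.0, sigma=6.0) for _ in range(30)]  # % improvements

mean_improvement = sum(deltas) / len(deltas)
worst_three = sorted(deltas)[:3]

print(f"mean improvement across 30 benchmarks: {mean_improvement:+.1f}%")
print("'evidence of a wall' (worst 3 only): "
      + ", ".join(f"{d:+.1f}%" for d in worst_three))
```

With a true mean around +8%, the three weakest draws will often be flat or negative, and a story built on them alone reads as stagnation. The defense is to ask for the full benchmark distribution, not a curated excerpt.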

Social Identity and Tribal Logic

The spread of AI myths is linked to social identity—people accept or reject information depending on which social group it aligns with. AI skeptics, technophobes, and advocates for slowing development form cognitive communities where the myth becomes a marker of belonging.

The myth stops being a statement about facts and becomes a signal: "I belong to a group that thinks critically" or "I care about safety."

Criticism of the myth is perceived as an attack on identity, which intensifies defensive reactions and polarization.

⚡ Ambiguity and Fear of Uncertainty

All three myths thrive in conditions of uncertainty. AI is a field where even experts disagree, and the future is unpredictable. Myths offer simple, understandable narratives: "autonomous vehicles are dangerous," "AI is slowing down," "AI will take your job."

Intolerance of Ambiguity
A psychological state where uncertainty causes anxiety. Myths reduce this anxiety by offering a clear answer, even if it's wrong.
Illusion of Control
The belief that if we "know" the danger, we can prevent it. The myth provides a sense of control over the uncontrollable.

Connection to Broader Narratives

AI myths embed themselves in larger cultural narratives: fear of technology, distrust of corporations, anxiety about the future of employment. This makes them resistant to factual criticism because they confirm already existing worldviews.

Counteraction requires not only facts but also understanding what psychological needs these myths satisfy. The strategy must offer alternative narratives that reduce anxiety and provide a sense of control without distorting reality. For more on the mechanisms of misconception spread, see the article "Artificial God: Why We Create Symbols That Then Create Us."

⚔️ Counter-Position Analysis

⚖️ Critical Counterpoint

The optimism of DeepMind and OpenAI experts relies on short-term data and may overlook systematic errors in measurements, economic calculations, and comparison methodology. This is where the article's position is vulnerable.

Insufficient Data on Long-term Trajectory

Expert claims are based on a snapshot from 2025. The history of technology is full of examples where short-term performance leaps were followed by prolonged plateaus. Current progress may be the result of "low-hanging fruit" in new methods (reinforcement learning, synthetic data), which may also become exhausted.

Autonomous Vehicle Statistics: Sampling and Context Problem

Claims about the superiority of autonomous vehicles over human drivers may suffer from systematic sampling bias. Autonomous cars are tested predominantly in favorable conditions (good weather, quality road markings, specific geographic zones), while human drivers face the full spectrum of road conditions. Moreover, the underlying sources offer little methodological detail on how the cited accident rates were measured beyond the headline claim of statistical superiority.

Economic Barriers vs Technological Determinism

The article rightly points to slow AI adoption due to economic barriers, but may underestimate the speed at which these barriers can be overcome. History shows that computing costs fall exponentially, and organizational resistance can be broken by competitive pressure. Companies not implementing AI may quickly lose to those that do.

Absence of Quantitative Progress Metrics

The article relies heavily on qualitative expert statements ("record leap," "substantial progress") and cites only a handful of benchmark figures without methodological detail. A "record leap" could mean a 5% or a 50% improvement; without consistent, independently verifiable numbers, such claims lose precision and become unavailable for independent verification.

Ignoring Qualitative Changes in the Nature of Limitations

Perhaps the "wall" exists, but it is qualitatively different than expected. Instead of stopping quantitative progress, we may face fundamental limitations in models' capacity for genuine understanding, causal reasoning, and generalization beyond the training distribution. The article focuses on refuting quantitative stagnation but does not consider the possibility of qualitative limits of the current paradigm.

Frequently Asked Questions

Has AI development hit a "scaling wall"?
No, this is a misconception. Data from late 2025 shows continued significant progress: Oriol Vinyals, head of Google DeepMind's deep learning team, stated after the Gemini 3 release that the performance leap was "as large as we've ever seen," adding "no walls in sight" (S010, S012). OpenAI, Google, and Anthropic released models demonstrating substantial progress on economically valuable tasks five months after GPT-5's May 2025 launch. The "wall" myth arose from confusion between the slowdown of one specific scaling method (pre-training on large datasets) and the halting of AI progress overall. Helen Toner from the Center for Security and Emerging Technology notes that in domains with expensive training data (such as AI agents as personal shoppers), progress may be slower, but this doesn't indicate a general technology standstill (S010, S012).

Are autonomous vehicles more dangerous than human drivers?
No, statistics show the opposite. While AI failures in autonomous vehicles can lead to fatalities that attract significant media attention, overall accident statistics demonstrate the superiority of autonomous systems over human driving (S012). The availability bias causes people to overestimate autonomous vehicle risk: individual accidents involving self-driving cars receive widespread media coverage, while thousands of daily accidents involving human drivers go unnoticed. It's important to distinguish between absolute risk (the possibility of an accident exists) and relative risk (autonomous vehicles are statistically safer). Sources indicate that when a chatbot fails, someone makes a homework mistake; when an autonomous vehicle fails, people may die—but the frequency of such failures in autonomous systems is lower than with human drivers (S012).

Will AI soon completely replace human specialists?
No, mass replacement is occurring much slower than predicted. While AI demonstrates impressive capabilities in performing individual tasks, the economic feasibility of complete specialist replacement faces multiple barriers: high implementation costs, the need for expensive training data in specialized domains, organizational resistance, regulatory constraints, and the fact that many professions require a complex skill set including social interaction and contextual understanding (S010, S012). Helen Toner notes: "Maybe AI will keep getting better, or maybe AI will keep sucking at important things"—pointing to uncertainty in the practical application trajectory (S010, S012). The gap between technical capability and economic realization remains significant.

What is the "scaling wall" myth?
This is the misconception that progress in developing large language models has stopped due to exhausted scaling possibilities. The myth emerged after GPT-5's May 2025 release, when some observers questioned whether AI had reached its development limit (S010, S012). However, this claim is based on conflating concepts: the slowing effectiveness of one specific approach (increasing model size and pre-training data volume) was mistakenly interpreted as a halt to all AI progress. In reality, researchers are actively developing alternative methods for improving models: reinforcement learning, synthetic data generation, architectural innovations, and specialized fine-tuning techniques. Oriol Vinyals' statement about Gemini 3's "record-breaking leap" in performance directly refutes the idea of reaching a limit (S010, S012).

Why do autonomous vehicles seem more dangerous than human drivers?
Due to availability bias and asymmetric media coverage. The human brain assesses event probability by how easily examples come to mind (availability heuristic). Accidents involving autonomous vehicles receive disproportionate media attention due to the technology's novelty and the dramatic narrative of "machine kills person," while thousands of daily accidents involving human drivers are perceived as routine and don't make the news (S012). An additional factor is the emotional response to loss of control: people feel psychologically more comfortable behind the wheel (illusion of control), even if it's statistically more dangerous. Sources emphasize the difference in failure consequences: a chatbot error leads to incorrect homework, an autonomous vehicle error can lead to death—but this doesn't mean autonomous vehicles are more dangerous overall (S012).

Which models refuted the narrative of slowing AI progress?
OpenAI, Google, and Anthropic released models with substantial progress. Specifically mentioned is Gemini 3 from Google DeepMind, about which Oriol Vinyals stated the performance leap was "as large as we've ever seen" (S010, S012). These models demonstrated improvements in economically valuable tasks—that is, applications with practical commercial value, not just academic benchmarks. The releases occurred five months after GPT-5's May 2025 launch, refuting the narrative of slowing progress. Sources point to continued development despite the popular belief that "scaling is over" (S010, S012). Importantly, progress is measured not only by model size but by the quality of practical task execution.

What is Helen Toner's position on AI's development trajectory?
Helen Toner, interim executive director of the Center for Security and Emerging Technology, takes a cautious position on the uncertainty of AI's development trajectory. She states: "Maybe AI will keep getting better, or maybe AI will keep sucking at important things" (S010, S012). This indicates that despite impressive progress in some areas, there are domains where improvement will be slow due to the expense of obtaining training data—for example, deploying AI agents as personal shoppers requires costly data about actual user behavior (S010, S012). Toner's position emphasizes the difference between technical capability and practical realization, as well as the importance of avoiding both techno-optimism and techno-pessimism.

How can claims about AI progress be verified?
Look for primary sources from leading research labs and specific performance metrics. Rather than relying on media headlines, check technical reports and statements from key researchers: for example, Oriol Vinyals' statement about Gemini 3 can be found in official Google DeepMind channels (S010, S012). Watch for concept substitution: the slowdown of one method (e.g., pre-training scaling) doesn't equal the halt of all progress. Verify whether progress is measured on practical tasks with economic value, not just academic benchmarks. Seek opinions from experts at different organizations (OpenAI, Google, Anthropic, independent research centers) for a balanced picture. Critically evaluate timeframes: claims about a "wall" are often made too early, before new methodological breakthroughs emerge.

Why do "economically valuable tasks" matter when assessing AI progress?
Because they demonstrate the technology's practical applicability, not just theoretical capabilities. Sources specifically emphasize that new models from OpenAI, Google, and Anthropic show "substantial progress on economically valuable tasks" (S010, S012). This is a critically important qualification: a model may show impressive results on academic benchmarks but lack commercial value due to high inference costs, unreliability in real-world conditions, or inability to integrate into existing business processes. A task's economic value means companies are willing to pay for its solution, creating financial incentive for further technology development. It's also an indicator that AI is transitioning from the research phase to practical application, refuting the stagnation narrative.

Why is AI not replacing specialists as quickly as predicted?
Economic, organizational, regulatory, and technical barriers create a significant gap between capability and realization. High implementation costs include not only AI system licenses but also infrastructure, personnel training, and integration with existing processes (S010, S012). In domains where training data is expensive—such as personalized AI agents—progress slows due to the need to collect specific user behavior data (S010, S012). Organizational resistance includes fear of job loss, distrust of new technology, and corporate culture inertia. Regulatory constraints are particularly strong in medicine, law, and finance. Technically, many professions require a complex skill set: social interaction, emotional intelligence, contextual understanding, creativity in non-standard situations—areas where AI still lags behind humans or requires significant refinement.
Deymond Laplasa
Cognitive Security Researcher

Author of the Cognitive Immunology Hub project. Researches mechanisms of disinformation, pseudoscience, and cognitive biases. All materials are based on peer-reviewed sources.
