What we call "artificial intelligence", and why this definition already contains a perceptual trap
Before analyzing promises and risks, we must establish the boundaries of the concept. Artificial intelligence in contemporary discourse spans a spectrum from narrowly specialized machine learning algorithms to hypothetical artificial general intelligence (AGI), capable of solving any cognitive task at or above human level.
Research (S003) defines AI as a technology that "will likely substantially change both marketing strategies and consumer behavior in the future," proposing a multidimensional framework for understanding AI's impact, including levels of intelligence, task types, and embeddedness in robotic systems.
The problem of blurred boundaries: from calculator to consciousness
The key perceptual trap: the term "AI" is applied to systems with radically different capabilities. Narrow AI solves specific tasks such as facial recognition, recommendations, and diagnostics. General AI (AGI) is a hypothetical system capable of learning and problem-solving in any domain.
Conflating these categories in public discourse creates the illusion that advances in narrow AI automatically bring us closer to AGI, though an insurmountable chasm may lie between them.
Research (S002) emphasizes the need to distinguish between logicist, emergentist, and universalist approaches to AGI, noting that "we are trying to define what is necessary to create an Artificial Scientist." This uncertainty in the very foundations of the theory is the first sign that AI discourse is built on shaky ground.
Semantic manipulation: how the word "intelligence" programs expectations
The very use of the word "intelligence" to describe algorithms creates anthropomorphic projection. When a system recommends a product based on purchase history, we tend to attribute understanding of our desires to it, though in reality it's statistical correlation.
- Anthropomorphism
- Attribution of human qualities (understanding, intention, desire) to non-human systems. Triggers automatically when a system demonstrates human-like behavior.
- Decision opacity
- Even if AI makes decisions that are optimal with respect to a given objective function, their logic may be opaque or contradict human values.
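To see how little "understanding" is involved, here is a minimal sketch of a recommender built from nothing but co-occurrence counts. The purchase histories and item names are invented for illustration; real systems are more elaborate, but the principle of correlation without comprehension is the same.

```python
# A toy item-item recommender: the "understanding" is co-occurrence counting.
# Purchase histories below are hypothetical, purely for illustration.
from collections import Counter
from itertools import combinations

histories = [
    ["diapers", "baby_wipes", "coffee"],
    ["diapers", "baby_wipes"],
    ["coffee", "croissant"],
    ["diapers", "coffee"],
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in histories:
    for a, b in combinations(sorted(set(basket)), 2):
        pair_counts[(a, b)] += 1

def recommend(item):
    """Rank other items by raw co-occurrence with `item`: no notion of desire."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [name for name, _ in scores.most_common()]

print(recommend("diapers"))  # ['baby_wipes', 'coffee'] -- pure statistics
```

The system "knows" that diaper buyers also buy wipes, but nothing in it represents why; swap the item names for random strings and it behaves identically.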
Research (S008) warns: "Any non-human intelligence may construct solutions in such a way that any justification for their behavior lies beyond what humans are inclined to notice or understand." This means we may find ourselves in a situation where the system works, but its logic remains a black box to us.
Analytical framework: from promises to mechanisms
For proper analysis, we must separate three levels of discourse:
- Technical capabilities: what AI can actually do today.
- Extrapolated promises: what, according to proponents, AI will be able to do in the future.
- Hidden risks: what problems arise when scaling the technology.
Research (S003) notes that "AI may not deliver on all its promises due to issues related to data privacy, algorithmic biases, and ethics." This triad (privacy, bias, ethics) will be central to our analysis.
| Discourse level | What is claimed | Verification source |
|---|---|---|
| Technical | System performs task X with Y% accuracy | Experimental data, reproducibility |
| Prognostic | By year Z system will solve class of tasks W | Trends, extrapolation, but not facts |
| Risk-oriented | Scaling will create problems P, Q, R | Mechanisms, analogies, scenario modeling |
Without this separation, we conflate facts with assumptions and become captive to a narrative where every narrow AI achievement is interpreted as a step toward AGI, and every risk is either exaggerated or ignored.
Steel Man: Seven Most Compelling Arguments for AI's Revolutionary Potential
Before critiquing AI promises, it's necessary to present them in their strongest form. The "steel man" principle requires examining the best, not worst, versions of opposing arguments.
Seven key promises, backed by research, genuinely point to the transformative potential of the technology.
Argument 1: Real-Time Marketing Personalization at the Individual Preference Level
Research (S003) demonstrates that AI can analyze data and provide personalized recommendations in real time: the next product, the optimal price, the purchase context. Algorithms account for time of day, location, and emotional state (inferred from behavioral patterns), and adapt offers with precision unattainable by humans.
Potential benefit: for business, increased conversion; for consumers, reduced cognitive load in decision-making.
Argument 2: Revolution in Medical Diagnostics and Cancer Treatment
Source (S001) focuses on AI applications in oncology: machine learning algorithms analyze medical images, genomic data, and patient histories for early diagnosis and personalized treatment. Computer vision systems already surpass radiologists in detecting certain tumor types at early stages.
Promise: reduced mortality, shorter time to diagnosis, optimization of therapeutic protocols based on predictive models.
Argument 3: Autonomous Systems Will Free Humans from Dangerous and Monotonous Work
From self-driving cars to surgical robots, autonomous systems promise to take on tasks involving life-threatening risks or requiring superhuman precision. Research (S005) emphasizes that the emergence of increasingly autonomous systems requires AI agents that can handle environmental uncertainty through creativity.
AI doesn't simply execute programmed actions but adapts to unforeseen situations, which is critical for deployment in dynamic environments.
Argument 4: Accelerating Scientific Discovery Through Automated Hypothesis Generation and Experimentation
The "Artificial Scientist" concept (S002) envisions creating systems capable of formulating hypotheses, planning experiments, and interpreting results without human intervention. AI is already being used to discover new materials, drugs, and optimize chemical reactions, reducing the time from idea to prototype from years to months.
If this becomes reality, the pace of scientific progress could increase by orders of magnitude.
Argument 5: Creative Problem-Solving Under Incomplete Information
Research (S005) defines creative problem-solving (CPS) as a subfield of AI focusing on methods for addressing non-standard or anomalous problems in autonomous systems. AI can find unconventional solutions in situations where traditional algorithms fail.
The application potential ranges from crisis management to engineering innovation.
Argument 6: Scalability and Accessibility of Expertise for Billions of People
Generative models like ChatGPT demonstrate the ability to provide consultation, education, and support at a level previously accessible only through expensive specialists. Research (S006) analyzes multidisciplinary perspectives on the capabilities and implications of generative conversational AI for research, practice, and policy.
Democratization of knowledge, lowering barriers to education and professional development on a global scale.
Argument 7: Resource Optimization and Reduced Environmental Footprint Through Predictive Analytics
AI analyzes massive datasets to optimize energy consumption, logistics, agriculture, and industrial processes. Predictive models prevent equipment failures, reduce waste, and increase resource efficiency.
In the context of the climate crisis, this is not merely economic benefit but a potentially critical tool for sustainable development.
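To make the predictive-analytics claim concrete, here is a minimal sketch of threshold-based failure prediction. The sensor readings, the window size, and the 3-sigma rule are all illustrative assumptions, a sketch rather than a production recipe.

```python
# Flag a machine for inspection when a sensor reading drifts far from its
# rolling mean. Readings are hypothetical bearing temperatures in Celsius.
import statistics

readings = [70.1, 70.4, 69.8, 70.2, 70.0, 70.3, 74.9, 75.6]
WINDOW = 5  # how many past readings define "normal"

for i in range(WINDOW, len(readings)):
    window = readings[i - WINDOW:i]
    mean = statistics.mean(window)
    stdev = statistics.stdev(window)
    # 3-sigma rule: an arbitrary but common anomaly threshold.
    if abs(readings[i] - mean) > 3 * stdev:
        print(f"t={i}: {readings[i]} C deviates from rolling mean "
              f"{mean:.1f} C -- schedule an inspection")
```

Real systems replace the fixed threshold with learned models, but the economic logic is the same: catch the drift before the failure.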
In summary, the seven promises:
- Real-time marketing personalization: increased conversion through contextual recommendations
- Medical diagnostics: earlier tumor detection, treatment optimization
- Autonomous systems: liberation from dangerous and monotonous work
- Accelerated scientific discovery: shortened cycle from idea to prototype
- Creative problem-solving: unconventional solutions under uncertainty
- Scalable expertise: access to knowledge for billions of people
- Resource optimization: reduced environmental footprint through predictive analytics
Evidence Base: What Research Says About AI's Real Capabilities and Limitations
Moving from promises to facts, it's necessary to analyze what exactly is confirmed by empirical data and what remains speculation. Research demonstrates both impressive achievements and systemic limitations that rarely make headlines.
Marketing Personalization: Effectiveness vs. Manipulation
Research (S003) confirms that AI is indeed capable of analyzing behavioral data and providing personalized recommendations in real time. However, the same work warns: personalization effectiveness depends on system opacity.
Once a user realizes their behavior is being predicted and directed by an algorithm, reactive resistance emerges. Personalization can amplify cognitive biases: if an algorithm optimizes for engagement, it shows content that confirms existing beliefs (echo chamber effect), even when this contradicts the user's long-term interests.
Algorithmic biases remain an unresolved problem at the system architecture level. Research (S003) notes the risks but offers no mechanisms for their elimination.
AI in Medicine: Diagnostic Accuracy vs. Decision Opacity
AI application in oncology (S001) demonstrates impressive results in medical image analysis. The critical problem is the opacity of neural network decisions: a physician cannot explain why the algorithm classified a formation as malignant.
This creates legal and ethical risks. If the diagnosis proves incorrect, who bears responsibilityāthe developer, the institution, or the physician who trusted the system?
Algorithmic bias compounds these risks, as the sketch after this list illustrates:
- Algorithms are trained on historical data containing systematic biases
- Insufficiently representative samples lead to errors for underrepresented groups
- Facial recognition algorithms show higher error rates for Black individuals
- Medical AI underestimates risks for women if trained predominantly on male data
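The mechanism behind such disparities is easy to reproduce. The sketch below trains a classifier on synthetic data in which one group dominates the training set; the features and the group-specific boundary shift are assumptions made purely for illustration.

```python
# Representation bias in miniature: a model trained mostly on group A
# generalizes worse to under-sampled group B. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Binary task on two features; `shift` changes the true boundary per group."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

Xa, ya = make_group(2000, shift=0.2)  # well-represented group A
Xb, yb = make_group(100, shift=1.5)   # underrepresented group B, different boundary
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group.
for name, shift in [("group A", 0.2), ("group B", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "error rate:", round(1 - model.score(X_test, y_test), 3))
```

Group B's higher error rate is not malice in the algorithm; it is arithmetic: the loss is dominated by whichever group dominates the data.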
Autonomous Systems: Adaptability vs. Unpredictability
Research (S005) emphasizes that autonomous systems must handle uncertainty through creativity. But this creates a new problem: how to guarantee the safety of a system that can generate unpredictable decisions?
Autonomous vehicles in moral dilemma situations are a classic example. An algorithm may make a decision that is optimal in terms of minimizing casualties yet unacceptable to human ethics.
| Parameter | Capability | Risk |
|---|---|---|
| Adaptability | System handles uncertainty | Decisions become unpredictable |
| Optimization | Minimizing casualties in critical situations | Conflict with human ethics |
| Explainability | System operates efficiently | We cannot explain its decisions |
As systems grow more complex, we may face situations where AI makes decisions we can neither predict nor explain after the fact. Research (S008) warns: non-human intelligence may construct decisions in ways where any justification lies beyond what humans are inclined to notice or understand.
Generative Models: Democratization of Knowledge vs. Spread of Disinformation
Research (S006) analyzes the capabilities and risks of generative models like ChatGPT. On one hand, these systems provide access to information at an unprecedented level. On the other, they generate convincing-sounding but factually incorrect content (the "hallucination" phenomenon).
They can be used to create deepfakes, phishing attacks, and mass-produced disinformation. The critical problem is the absence of verification mechanisms: users cannot easily distinguish correct answers from plausible fabrications, especially in areas where they lack expertise.
- Hallucination
- Generation of convincing-sounding but factually incorrect content. Creates risk of eroding trust in information generally.
- Verification
- Absence of built-in mechanisms for checking accuracy. If any text can be AI-generated, how do we identify credible sources?
- Scalability of Disinformation
- Generative models enable industrial-scale disinformation production, exceeding human verification capabilities.
The connection between these limitations and broader cognitive security issues is revealed in the analysis of AI ethics and safety. Systematic biases in medical algorithms echo historical errors, as shown in research on AI physiognomy and the return of phrenology.
Mechanisms of Causality: Correlation, Confounders, and the Illusion of Understanding
One of the key problems with modern AI is the confusion between correlation and causality. Machine learning algorithms identify statistical patterns in data but don't understand cause-and-effect relationships.
This creates a risk of false conclusions and ineffective interventions, especially when decisions are made based on correlations without verifying the underlying mechanisms.
The Confounder Problem: When Correlation Misleads
Classic example: an algorithm discovers that people who buy a certain product make repeat purchases more frequently. Marketers interpret this as proof of brand loyalty and invest in promotion.
However, the real cause may be a third variable (a confounder), for example high customer income, which correlates with both product purchase and overall purchase frequency. Promoting the product won't increase loyalty if the true cause is income.
| Observed Correlation | Assumed Cause | Actual Confounder | Result of Incorrect Inference |
|---|---|---|---|
| Product purchase → repeat purchases | Brand loyalty | High income | Marketing investments yield no effect |
| Medication → health improvement | Drug efficacy | Young age, healthy lifestyle | Prescription of ineffective treatment |
| Education → high income | Knowledge and skills | Social capital, connections | Overestimation of formal education's role |
(S003) notes that AI can analyze data and provide recommendations but doesn't specify how systems distinguish between correlation and causality. This is a fundamental limitation: without experimental design (randomized controlled trials), it's impossible to establish causal relationships.
Most commercial AI systems work with observational data, where confounders remain invisible to the algorithm.
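The confounder scenario from the table can be simulated in a few lines. The coefficients and the income mechanism below are illustrative assumptions; the point is that a naive comparison shows a strong "loyalty effect" that collapses once the hidden variable is held roughly constant.

```python
# Income drives both buying the product and returning to the store, creating
# a product -> loyalty correlation with no causal link. Numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

income = rng.normal(size=n)                          # hidden confounder
buys_product = (income + rng.normal(size=n)) > 0.5   # richer customers buy more
repeat_rate = 2.0 * income + rng.normal(size=n)      # ...and also return more

# Naive comparison: buyers look far more "loyal".
print("buyers:    ", repeat_rate[buys_product].mean())
print("non-buyers:", repeat_rate[~buys_product].mean())

# Stratify by the confounder: within a narrow income band, the gap collapses.
band = (income > -0.1) & (income < 0.1)
print("same-income buyers:    ", repeat_rate[band & buys_product].mean())
print("same-income non-buyers:", repeat_rate[band & ~buys_product].mean())
```

An algorithm fed only the first two columns of such data would confidently "discover" loyalty, because the confounder was never in its inputs.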
The Illusion of Understanding: Why AI Explanations Can Be False
Even when developers attempt to make AI "explainable" (explainable AI, XAI), explanations may be artifacts that don't reflect the system's actual logic. The LIME (Local Interpretable Model-agnostic Explanations) method creates a simplified model that approximates the behavior of a complex neural network in a local region.
But this approximation can be inaccurate, and the user gets an illusion of understanding that doesn't correspond to reality. The explanation looks logical, but it's the logic of the simplified model, not the system itself.
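The sketch below hand-rolls a LIME-style local surrogate (it does not use the actual lime library): a linear model is fitted to a black box's outputs in a small neighborhood of one point. The black-box function and the neighborhood width are invented for illustration; the takeaway is that the "explanation" describes the surrogate and its neighborhood, not the model.

```python
# Fit a local linear surrogate to an opaque model around one instance.
import numpy as np

def black_box(X):
    """Stand-in for an opaque model: a nonlinear decision score."""
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(1)
x0 = np.array([0.5, 1.0])  # the instance we want "explained"

# Sample a neighborhood around x0 and fit a linear model by least squares.
neighborhood = x0 + 0.1 * rng.normal(size=(500, 2))
y = black_box(neighborhood)
A = np.hstack([neighborhood, np.ones((500, 1))])  # columns: x1, x2, intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print("local 'explanation' (feature weights):", coef[:2])
# Repeat the procedure around a different point and the weights change:
# the explanation is a property of the surrogate, not of the black box.
```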
(S008) warns that non-human intelligence may construct solutions whose justification lies beyond human comprehension. As systems grow more complex, the gap between AI's actual logic and our explanations widens, creating a risk of catastrophic errors we won't be able to anticipate or correct.
This is especially dangerous in critical domains where decisions affect people's lives.
Feedback Loops and Self-Reinforcing Cycles
A critical problem rarely discussed in public discourse: AI doesn't just analyze data; it changes the environment, generating new data on which it will train in the future.
- Recommendation Systems
- Show content they believe the user will like. The user interacts with the content, generating new data that confirms the original assumption. Result: a self-reinforcing cycle that entrenches biases and limits diversity of experience.
- Marketing and Personalization
- (S003) shows that personalization may narrow rather than expand consumer choice, showing users only what the algorithm considers relevant based on past behavior.
- Medical Diagnosis
- Algorithms will get better at diagnosing diseases already well-represented in the data, and worse at rare pathologies, creating systematic blindness.
These cycles create an illusion of quality improvement (the system becomes more accurate), but in reality the system adapts to its own errors, deepening them.
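A toy simulation shows how quickly such a cycle locks in, as sketched below. The user's true interest in both topics is fixed and identical; which topic ends up dominating is decided by early random clicks that the system then amplifies. All numbers are illustrative assumptions.

```python
# A recommender that "learns" from its own output: exposure is reinforced
# by clicks, and clicks only happen on what was exposed (a Polya-urn dynamic).
import random

random.seed(7)
interest = {"sports": 0.5, "science": 0.5}  # true, fixed click probability
clicks = {"sports": 1, "science": 1}        # what the system has observed

for _ in range(2000):
    total = sum(clicks.values())
    weights = [c / total for c in clicks.values()]
    topic = random.choices(list(clicks), weights)[0]  # recommend by past clicks
    if random.random() < interest[topic]:             # user clicks, or not
        clicks[topic] += 1

share = clicks["sports"] / sum(clicks.values())
print(f"share of sports in the feed: {share:.2f}")
# Rerun with another seed and the lock-in point moves: the final mix reflects
# path dependence, not the user's (identical) true interests.
```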
The connection to AI myths is obvious: the myth of "algorithm neutrality" ignores that each system decision changes the data it trains on, turning AI into an active participant in shaping reality rather than a passive observer.
Conflicts and Uncertainties: Where Sources Diverge and What It Means
Analysis of the sources reveals several areas where researchers have not reached consensus. These disagreements are critical for evaluating the reliability of AI promises.
Disagreement 1: Achievability of Artificial General Intelligence (AGI)
Research (S002) examines various approaches to creating AGI (logicist, emergentist, and universalist) but does not conclude which is most promising, or even feasible.
This points to fundamental uncertainty: we don't know whether AGI is possible in principle, and if so, on what timeline. Predictions range from "within 10 years" to "never."
Any long-term promises based on AGI remain speculative until the basic uncertainty about its achievability is resolved.
Disagreement 2: Balance Between Personalization and Manipulation
Research (S003) acknowledges that personalization can cause user discomfort if they realize they're interacting with a bot, but offers no clear criteria for distinguishing ethical personalization from manipulation.
Where is the line between "helping with decision-making" and "directing behavior in the company's interest"? The lack of consensus means regulation lags behind technology.
| Aspect | Personalization (assistance) | Manipulation (control) |
|---|---|---|
| Goal | Matching user preferences | Changing behavior in company's interest |
| Transparency | User knows about the mechanism | Mechanism is hidden or obscured |
| Control | User can disable/modify | User cannot effectively counteract |
Companies de facto define ethical standards themselves, as regulatory frameworks are absent. Related issues are examined in AI ethics and safety.
Disagreement 3: Accountability for Autonomous System Decisions
None of the reviewed studies offers a convincing solution to the accountability problem. If an autonomous vehicle makes an error, who is at fault: the manufacturer, the algorithm developer, the company that provided the training data, or the owner?
- Manufacturer is responsible for system integration and testing
- Algorithm developer is responsible for model quality
- Data provider is responsible for representativeness and cleanliness of training set
- Owner is responsible for proper use and maintenance
- Regulator is responsible for establishing standards and oversight
Research (S005) emphasizes the need for creative problem-solving under uncertainty, but does not discuss how to distribute responsibility for unpredictable decisions. Current legislation is not prepared for this challenge.
Cognitive Anatomy of the Myth: Which Psychological Mechanisms Are Exploited in AI Discourse
AI promises work not because the technology is impressive, but because they exploit predictable cognitive traps. Recognizing the mechanism means building a defense.
- Halo effect: one impressive achievement (ChatGPT writes code) transfers to the entire class of systems. Result: expectation of universality where there is narrow specialization.
- Appeal to authority: if a major company or famous scientist talks about revolution, critical thinking shuts down. Apologetics substitutes for analysis.
- Narrative of inevitability: "it will happen, it's only a matter of when" blocks the question "should this happen and under what conditions".
- Social proof: if everyone talks about AI as a miracle, dissonance with reality is suppressed by silence.
Each mechanism works independently, but together they create a cognitive shield through which facts cannot pass.
The AI myth is sustained not by evidence, but by psychological economy: it's easier to believe in revolution than to understand what the system actually does and doesn't do.
This is not manipulation in the classical sense; it's the natural result of information asymmetry. Those who understand the technology often can't explain it simply. Those who explain simply often don't understand the details.
- Cognitive immunity
- The ability to notice when emotion substitutes for analysis, when authority substitutes for evidence, when narrative substitutes for fact. This is not skepticism; it's mental hygiene.
Developing this immunity requires one thing: the habit of asking not "does this sound plausible", but "what exactly will happen, who will measure it, and how will I know it worked". AI myths crumble at first contact with this question.
