šŸ“ AI Ethics
āš ļøAmbiguous / Hypothesis

šŸ–¤ Artificial Intelligence: Promises of the Future, Complexity of the Past, and the Legacy We Ignore

Artificial intelligence promises a revolution in marketing, science, and autonomous systems, but its future is clouded by ethical dilemmas, algorithmic biases, and privacy concerns. Research shows that AI can transform consumer behavior and business models, yet risks of manipulation and decision opacity remain critical. This article examines what lies behind AI's promises, which deception mechanisms operate in technology discourse, and offers a protocol for verifying claims about AI's "bright future."

šŸ”„ Updated: February 19, 2026 Ā· šŸ“… Published: February 18, 2026 Ā· ā±ļø Reading time: 13 min

Neural Analysis
  • Topic: Artificial Intelligence — analysis of promises, historical context, and ethical risks of the technology
  • Epistemic status: Moderate confidence — data from academic sources and practical case studies, but long-term effects of AI remain subject to debate
  • Evidence level: Review articles, conceptual studies, arXiv preprints, practical cases from marketing and oncology
  • Verdict: AI is indeed transforming industries, but claims of a "cloudless future" ignore systemic risks: algorithmic biases, privacy concerns, decision opacity, and potential behavioral manipulation. Promises require critical scrutiny.
  • Key anomaly: AI discourse often substitutes technical capabilities for ethical guarantees — the ability to personalize advertising does not imply the right to manipulate choice
  • Check in 30 sec: Ask: "Who controls the data this AI was trained on, and whose interests does it serve?"
Artificial intelligence promises us a future where marketing becomes personalized to the point of mind-reading, medicine defeats cancer, and autonomous systems free humanity from routine. But behind every promise lies a complex reality: algorithmic biases, ethical dilemmas, opaque decision-making, and manipulation risks that are often swept under the rug. šŸ‘ļø This article is neither a manifesto of techno-optimism nor a Luddite pamphlet, but an attempt to dissect the mechanisms that make us believe in the "bright future of AI" while ignoring its dark legacy. We explore what lies behind the promises, which cognitive traps are exploited in technology discourse, and offer a verification protocol to help distinguish real progress from marketing illusion.

šŸ“ŒWhat we call "artificial intelligence" — and why this definition already contains a perceptual trap

Before analyzing promises and risks, we must establish the boundaries of the concept. Artificial intelligence in contemporary discourse spans a spectrum from narrowly specialized machine learning algorithms to hypothetical artificial general intelligence (AGI), capable of solving any cognitive task at or above human level. More details in the section Techno-Esotericism.

Research (S003) defines AI as a technology that "will likely substantially change both marketing strategies and consumer behavior in the future," proposing a multidimensional framework for understanding AI's impact, including levels of intelligence, task types, and embeddedness in robotic systems.

🧩 The problem of blurred boundaries: from calculator to consciousness

The key perceptual trap: the term "AI" is applied to systems with radically different capabilities. Narrow AI solves specific tasks: facial recognition, recommendations, diagnostics. General AI (AGI) is a hypothetical system capable of learning and problem-solving in any domain.

Conflating these categories in public discourse creates the illusion that advances in narrow AI automatically bring us closer to AGI, though an insurmountable chasm may lie between them.

Research (S002) emphasizes the need to distinguish between logicist, emergentist, and universalist approaches to AGI, noting that "we are trying to define what is necessary to create an Artificial Scientist." This uncertainty in the very foundations of the theory is the first sign that AI discourse is built on shaky ground.

āš ļø Semantic manipulation: how the word "intelligence" programs expectations

The very use of the word "intelligence" to describe algorithms creates anthropomorphic projection. When a system recommends a product based on purchase history, we tend to attribute understanding of our desires to it, though in reality it's statistical correlation.
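
To make this concrete, here is a deliberately trivial sketch (the products and baskets are invented): a "recommender" built from nothing but co-occurrence counts, with no model of desire or meaning anywhere in it.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories. No "understanding" is involved:
# the system only counts which items appear in the same basket.
baskets = [
    {"tent", "sleeping_bag"},
    {"tent", "camping_stove"},
    {"tent", "sleeping_bag", "headlamp"},
    {"novel", "bookmark"},
]

co_occurrence = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(item: str, top_n: int = 3) -> list[str]:
    """Rank items by how often they co-occurred with `item`."""
    scores = {b: n for (a, b), n in co_occurrence.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("tent"))   # ['sleeping_bag', 'camping_stove', 'headlamp']
```

When such a system suggests a sleeping bag to a tent buyer, it "knows" nothing about camping; it has merely counted.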

Anthropomorphism
Attribution of human qualities (understanding, intention, desire) to non-human systems. Triggers automatically when a system demonstrates human-like behavior.
Decision opacity
Even if AI makes decisions optimal from the standpoint of a given function, their logic may be opaque or contradict human values.

Research (S008) warns: "Any non-human intelligence may construct solutions in such a way that any justification for their behavior lies beyond what humans are inclined to notice or understand." This means we may find ourselves in a situation where the system works, but its logic remains a black box to us.

šŸ”Ž Analytical framework: from promises to mechanisms

For proper analysis, we must separate three levels of discourse:

  1. Technical capabilities — what AI can actually do today.
  2. Extrapolated promises — what, according to proponents, AI will be able to do in the future.
  3. Hidden risks — what problems arise when scaling the technology.

Research (S003) notes that "AI may not deliver on all its promises due to issues related to data privacy, algorithmic biases, and ethics." This triad — privacy, bias, ethics — will be central to our analysis.

Discourse level | What is claimed | Verification source
Technical | System performs task X with Y% accuracy | Experimental data, reproducibility
Prognostic | By year Z, system will solve task class W | Trends and extrapolation, not facts
Risk-oriented | Scaling will create problems P, Q, R | Mechanisms, analogies, scenario modeling

Without this separation, we conflate facts with assumptions and become captive to a narrative where every narrow AI achievement is interpreted as a step toward AGI, and every risk is either exaggerated or ignored.

[Figure: visualization of the semantic trap in the AI definition. Diagram illustrating the gap between the technical capabilities of narrow AI, the extrapolated promises of AGI, and the hidden risks ignored in public discourse.]

🧱Steel Man: Seven Most Compelling Arguments for AI's Revolutionary Potential

Before critiquing AI promises, it's necessary to present them in their strongest form. The "steel man" principle requires examining the best, not worst, versions of opposing arguments. More details in the section Deepfake Detection.

Seven key promises, backed by research, genuinely point to the transformative potential of the technology.

šŸ”¬ Argument 1: Real-Time Marketing Personalization at the Individual Preference Level

Research (S003) demonstrates that AI can analyze data and provide personalized recommendations in real time—the next product, optimal price, purchase context. Algorithms account for time of day, location, emotional state (inferred from behavioral patterns) and adapt offers with precision unattainable by humans.

Potential benefit: for business—increased conversion, for consumers—reduced cognitive load in decision-making.
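
As a rough illustration of the mechanism (not any vendor's actual system), such personalization can be thought of as a scoring function over contextual signals; the features and weights below are invented for the sketch.

```python
import math

# Hypothetical hand-set weights; a production system would learn these
# from behavioral data rather than hard-code them.
WEIGHTS = {"hour_match": 0.6, "near_store": 0.3, "recent_view": 1.1}

def offer_score(features: dict[str, float]) -> float:
    """Combine contextual signals into one probability-like relevance score."""
    z = sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))   # logistic squash into (0, 1)

# The same offer, scored for two hypothetical contexts.
print(offer_score({"hour_match": 1, "near_store": 1, "recent_view": 0}))  # ~0.71
print(offer_score({"hour_match": 0, "near_store": 0, "recent_view": 1}))  # ~0.75
```

The "precision unattainable by humans" is real, but it is precision in ranking offers, not insight into the person.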

🧪 Argument 2: Revolution in Medical Diagnostics and Cancer Treatment

Source (S001) focuses on AI applications in oncology: machine learning algorithms analyze medical images, genomic data, and patient histories for early diagnosis and personalized treatment. Computer vision systems already surpass radiologists in detecting certain tumor types at early stages.

Promise: reduced mortality, shorter time to diagnosis, optimization of therapeutic protocols based on predictive models.

🧬 Argument 3: Autonomous Systems Will Free Humans from Dangerous and Monotonous Work

From self-driving cars to surgical robots—autonomous systems promise to take on tasks involving life-threatening risks or requiring superhuman precision. Research (S005) emphasizes that the emergence of increasingly autonomous systems requires AI agents that can handle environmental uncertainty through creativity.

AI doesn't simply execute programmed actions but adapts to unforeseen situations—critical for deployment in dynamic environments.

šŸ“Š Argument 4: Accelerating Scientific Discovery Through Automated Hypothesis Generation and Experimentation

The "Artificial Scientist" concept (S002) envisions creating systems capable of formulating hypotheses, planning experiments, and interpreting results without human intervention. AI is already being used to discover new materials, drugs, and optimize chemical reactions, reducing the time from idea to prototype from years to months.

If this becomes reality, the pace of scientific progress could increase by orders of magnitude.

🧠 Argument 5: Creative Problem-Solving Under Incomplete Information

Research (S005) defines creative problem-solving (CPS) as a subfield of AI focusing on methods for addressing non-standard or anomalous problems in autonomous systems. AI can find unconventional solutions in situations where traditional algorithms fail.

Application potential—from crisis management to engineering innovation.

šŸ” Argument 6: Scalability and Accessibility of Expertise for Billions of People

Generative models like ChatGPT demonstrate the ability to provide consultation, education, and support at a level previously accessible only through expensive specialists. Research (S006) analyzes multidisciplinary perspectives on the capabilities and implications of generative conversational AI for research, practice, and policy.

Democratization of knowledge, lowering barriers to education and professional development on a global scale.

āš™ļø Argument 7: Resource Optimization and Reduced Environmental Footprint Through Predictive Analytics

AI analyzes massive datasets to optimize energy consumption, logistics, agriculture, and industrial processes. Predictive models prevent equipment failures, reduce waste, and increase resource efficiency.

In the context of the climate crisis, this is not merely economic benefit but a potentially critical tool for sustainable development.

  1. Real-time marketing personalization—increased conversion through contextual recommendations
  2. Medical diagnostics—earlier tumor detection, treatment optimization
  3. Autonomous systems—liberation from dangerous and monotonous work
  4. Accelerated scientific discovery—shortened cycle from idea to prototype
  5. Creative problem-solving—unconventional solutions under uncertainty
  6. Scalable expertise—access to knowledge for billions of people
  7. Resource optimization—reduced environmental footprint through predictive analytics

šŸ”¬Evidence Base: What Research Says About AI's Real Capabilities and Limitations

Moving from promises to facts, it's necessary to analyze what exactly is confirmed by empirical data and what remains speculation. Research demonstrates both impressive achievements and systemic limitations that rarely make headlines. More details in the AI and Technology section.

šŸ“Š Marketing Personalization: Effectiveness vs. Manipulation

Research (S003) confirms that AI is indeed capable of analyzing behavioral data and providing personalized recommendations in real time. However, the same work warns that this effectiveness depends on the system staying invisible: personalization works best when the user does not realize it is happening.

Once a user realizes their behavior is being predicted and directed by an algorithm, reactive resistance emerges. Personalization can amplify cognitive biases: if an algorithm optimizes for engagement, it shows content that confirms existing beliefs (echo chamber effect), even when this contradicts the user's long-term interests.

Algorithmic biases remain an unresolved problem at the system architecture level. Research (S003) notes the risks but offers no mechanisms for their elimination.

🧪 AI in Medicine: Diagnostic Accuracy vs. Decision Opacity

AI application in oncology (S001) demonstrates impressive results in medical image analysis. The critical problem is the opacity of neural network decisions: a physician cannot explain why the algorithm classified a formation as malignant.

This creates legal and ethical risks. If the diagnosis proves incorrect, who bears responsibility—the developer, the institution, or the physician who trusted the system?

  1. Algorithms are trained on historical data containing systematic biases
  2. Insufficiently representative samples lead to errors for underrepresented groups
  3. Facial recognition algorithms show higher error rates for Black individuals
  4. Medical AI underestimates risks for women if trained predominantly on male data (see the sketch below)
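
A toy synthetic experiment makes the mechanism visible. Everything below is invented for illustration (the feature, the group-specific thresholds, the sample sizes); the point is only that a model fitted mostly to one group silently learns that group's pattern and degrades on the minority group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: the feature-to-risk relationship differs between two
# groups (decision threshold 0.0 vs 1.0), loosely mimicking, e.g.,
# sex-specific symptom presentation. Training data is dominated by group A.
rng = np.random.default_rng(42)

def make_group(n: int, threshold: float):
    x = rng.normal(threshold, 1.5, size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)   # group-specific risk rule
    return x, y

x_a, y_a = make_group(950, threshold=0.0)   # well-represented group
x_b, y_b = make_group(50, threshold=1.0)    # underrepresented group

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([x_a, x_b]), np.concatenate([y_a, y_b])
)

for name, (x, y) in {"group A": (x_a, y_a), "group B": (x_b, y_b)}.items():
    print(f"{name}: accuracy = {model.score(x, y):.2f}")
# Typical output: group A scores near-perfectly, group B noticeably worse --
# the single learned threshold is the majority group's threshold.
```

No malice is required: the error pattern falls out of the optimization itself.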

🧬 Autonomous Systems: Adaptability vs. Unpredictability

Research (S005) emphasizes that autonomous systems must handle uncertainty through creativity. But this creates a new problem: how to guarantee the safety of a system that can generate unpredictable decisions?

Autonomous vehicles in moral dilemma situations are a classic example. An algorithm may make a decision that's optimal from the standpoint of minimizing casualties but unacceptable from the standpoint of human ethics.

Parameter | Capability | Risk
Adaptability | System handles uncertainty | Decisions become unpredictable
Optimization | Minimizing casualties in critical situations | Conflict with human ethics
Explainability | System operates efficiently | We cannot explain its decisions

As systems grow more complex, we may face situations where AI makes decisions we can neither predict nor explain after the fact. Research (S008) warns: non-human intelligence may construct decisions in ways where any justification lies beyond what humans are inclined to notice or understand.

🧾 Generative Models: Democratization of Knowledge vs. Spread of Disinformation

Research (S006) analyzes the capabilities and risks of generative models like ChatGPT. On one hand, these systems provide access to information at an unprecedented level. On the other, they generate convincing-sounding but factually incorrect content (the "hallucination" phenomenon).

They can be used to create deepfakes, phishing attacks, and mass production of disinformation. The critical problem is the absence of verification mechanisms: users cannot easily distinguish correct answers from plausible fabrications, especially in areas without expertise.

Hallucination
Generation of convincing-sounding but factually incorrect content. Creates risk of eroding trust in information generally.
Verification gap
Absence of built-in mechanisms for checking accuracy. If any text can be AI-generated, how do we identify credible sources?
Scalability of Disinformation
Generative models enable industrial-scale disinformation production, exceeding human verification capabilities.

The connection between these limitations and broader cognitive security issues is revealed in the analysis of AI ethics and safety. Systematic biases in medical algorithms echo historical errors, as shown in research on AI physiognomy and the return of phrenology.

[Figure: the AI evidence paradox. Visualization of the contradiction between AI's high accuracy in controlled conditions and its unpredictability in real scenarios with incomplete information and ethical dilemmas.]

🧠Mechanisms of Causality: Correlation, Confounders, and the Illusion of Understanding

One of the key problems with modern AI is the confusion between correlation and causality. Machine learning algorithms identify statistical patterns in data but don't understand cause-and-effect relationships. More details in the Sources and Evidence section.

This creates a risk of false conclusions and ineffective interventions, especially when decisions are made based on correlations without verifying the underlying mechanisms.

šŸ” The Confounder Problem: When Correlation Misleads

Classic example: an algorithm discovers that people who buy a certain product make repeat purchases more frequently. Marketers interpret this as proof of brand loyalty and invest in promotion.

However, the real cause may be a third variable (confounder)—for example, high customer income, which correlates with both product purchase and overall purchase frequency. Promoting the product won't increase loyalty if the true cause is income.

Observed correlation | Assumed cause | Actual confounder | Result of incorrect inference
Product purchase → repeat purchases | Brand loyalty | High income | Marketing investments yield no effect
Medication → health improvement | Drug efficacy | Young age, healthy lifestyle | Prescription of ineffective treatment
Education → high income | Knowledge and skills | Social capital, connections | Overestimation of formal education's role

(S003) notes that AI can analyze data and provide recommendations but doesn't specify how systems distinguish between correlation and causality. This is a fundamental limitation: without experimental design (randomized controlled trials), it's impossible to establish causal relationships.

Most commercial AI systems work with observational data, where confounders remain invisible to the algorithm.
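
The confounder logic is easy to check in simulation. In the hypothetical sketch below, buying the product has no causal effect on repeat purchases at all: income drives both, and stratifying by income makes the spurious "effect" collapse.

```python
import numpy as np

# Hypothetical simulation: income is the confounder. Buying the product
# has NO causal effect on repeat purchases, yet the naive comparison
# suggests a solid positive effect.
rng = np.random.default_rng(7)
n = 100_000

income = rng.normal(0, 1, n)          # standardized income
p = 1 / (1 + np.exp(-income))         # richer -> higher probability ...
buys_product = rng.random(n) < p      # ... of buying the product
repeats = rng.random(n) < p           # ... and of repeat purchases (independent draw)

naive = repeats[buys_product].mean() - repeats[~buys_product].mean()
print(f"naive 'effect' of the product: {naive:+.3f}")   # clearly positive

# Blocking the confounder: compare buyers and non-buyers within income strata.
bins = np.digitize(income, np.quantile(income, [0.25, 0.5, 0.75]))
adjusted = np.mean([
    repeats[(bins == b) & buys_product].mean()
    - repeats[(bins == b) & ~buys_product].mean()
    for b in range(4)
])
print(f"income-adjusted 'effect': {adjusted:+.3f}")
# Much smaller; the residual bias shrinks further with finer strata.
```

An observational dataset alone cannot reveal this; only experimental design (or explicit causal assumptions) can.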

🧬 The Illusion of Understanding: Why AI Explanations Can Be False

Even when developers attempt to make AI "explainable" (explainable AI, XAI), explanations may be artifacts that don't reflect the system's actual logic. The LIME (Local Interpretable Model-agnostic Explanations) method creates a simplified model that approximates the behavior of a complex neural network in a local region.

But this approximation can be inaccurate, and the user gets an illusion of understanding that doesn't correspond to reality. The explanation looks logical, but it's the logic of the simplified model, not the system itself.
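
The local-surrogate idea behind LIME, and its fragility, fits in a few lines. The sketch below is not the LIME library itself, just the core mechanism: perturb inputs around one instance of a hypothetical black box, fit a distance-weighted linear model, and read off coefficients that are only locally valid.

```python
import numpy as np

# Hypothetical black box: smooth in feature 1, a hard jump in feature 2.
def black_box(X):
    return np.sin(3 * X[:, 0]) + (X[:, 1] > 0.5).astype(float)

rng = np.random.default_rng(1)
x0 = np.array([0.2, 0.4])             # the instance to "explain"

# Perturb around x0 and fit a distance-weighted linear surrogate (LIME-style).
X_pert = x0 + rng.normal(0, 0.05, size=(500, 2))
y_pert = black_box(X_pert)
weights = np.exp(-np.sum((X_pert - x0) ** 2, axis=1) / 0.01)

# Weighted least squares via the sqrt-weight trick.
w = np.sqrt(weights)[:, None]
A = np.hstack([X_pert, np.ones((500, 1))]) * w
coef, *_ = np.linalg.lstsq(A, y_pert * w[:, 0], rcond=None)
print("local linear 'explanation' (feature slopes):", coef[:2])
# The surrogate reports that feature 2 barely matters at x0 (the function is
# flat just below 0.5), yet a tiny step past 0.5 changes the output by a
# full unit. A locally faithful explanation, globally misleading.
```

The explanation is real, but it describes the surrogate, not the system.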

(S008) warns that non-human intelligence may construct solutions whose justification lies beyond human comprehension. As systems grow more complex, the gap between AI's actual logic and our explanations widens, creating a risk of catastrophic errors we won't be able to anticipate or correct.

This is especially dangerous in critical domains where decisions affect people's lives.

🧷 Feedback Loops and Self-Reinforcing Cycles

A critical problem rarely discussed in public discourse: AI doesn't just analyze data—it changes the environment, generating new data on which it will train in the future.

Recommendation Systems
Show content they believe the user will like. The user interacts with the content, generating new data that confirms the original assumption. Result: a self-reinforcing cycle that entrenches biases and limits diversity of experience.
Marketing and Personalization
(S003) shows that personalization may not expand but narrow consumer choice, showing users only what the algorithm considers relevant based on past behavior.
Medical Diagnosis
Algorithms will get better at diagnosing diseases already well-represented in the data, and worse at rare pathologies, creating systematic blindness.

These cycles create an illusion of quality improvement (the system becomes more accurate), but in reality the system adapts to its own errors, deepening them.
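
A minimal simulation shows how quickly such a loop closes. In the hypothetical setup below, the user likes two items exactly equally, yet a system that greedily trusts its own click logs ends up showing almost nothing but one of them.

```python
import numpy as np

# Two items the user genuinely likes equally (click probability 0.5 each).
# The system shows whichever item its own logged click-through rate favors.
rng = np.random.default_rng(3)
clicks = np.array([1.0, 1.0])   # pseudo-counts to avoid division by zero
shows = np.array([1.0, 1.0])

for _ in range(10_000):
    item = int(np.argmax(clicks / shows))   # greedily pick the "better" item
    shows[item] += 1
    clicks[item] += rng.random() < 0.5      # true preference: identical

print("share of impressions per item:", shows / shows.sum())
# Typical result: one item captures the vast majority of impressions.
# The skew in the logs reflects the system's own past decisions,
# not any real difference in user preference.
```

The logged data now "proves" that one item is more popular, and the next model trained on those logs inherits the distortion.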

The connection to AI myths is obvious: the myth of "algorithm neutrality" ignores that each system decision changes the data it trains on, turning AI into an active participant in shaping reality rather than a passive observer.

āš ļøConflicts and Uncertainties: Where Sources Diverge and What It Means

Analysis of the sources reveals several areas where researchers have not reached consensus. These disagreements are critical for evaluating the reliability of AI promises. More details in the Thinking Tools section.

🧩 Disagreement 1: Achievability of Artificial General Intelligence (AGI)

Research (S002) examines various approaches to creating AGI (logicist, emergentist, and universalist) but does not conclude which is most promising or even feasible.

This points to fundamental uncertainty: we don't know whether AGI is possible in principle, and if so, on what timeline. Predictions range from "within 10 years" to "never."

Any long-term promises based on AGI remain speculative until the basic uncertainty about its achievability is resolved.

šŸ•³ļø Disagreement 2: Balance Between Personalization and Manipulation

Research (S003) acknowledges that personalization can cause user discomfort if they realize they're interacting with a bot, but offers no clear criteria for distinguishing ethical personalization from manipulation.

Where is the line between "helping with decision-making" and "directing behavior in the company's interest"? The lack of consensus means regulation lags behind technology.

Aspect | Personalization (assistance) | Manipulation (control)
Goal | Matching user preferences | Changing behavior in company's interest
Transparency | User knows about the mechanism | Mechanism is hidden or obscured
Control | User can disable/modify | User cannot effectively counteract

Companies de facto define ethical standards themselves, as regulatory frameworks are absent. Related issues are examined in AI ethics and safety.

🧾 Disagreement 3: Accountability for Autonomous System Decisions

None of the reviewed studies offers a convincing solution to the accountability problem. If an autonomous vehicle makes an error, who is at fault—the manufacturer, algorithm developer, company that provided training data, or owner?

  1. Manufacturer is responsible for system integration and testing
  2. Algorithm developer is responsible for model quality
  3. Data provider is responsible for representativeness and cleanliness of training set
  4. Owner is responsible for proper use and maintenance
  5. Regulator is responsible for establishing standards and oversight

Research (S005) emphasizes the need for creative problem-solving under uncertainty, but does not discuss how to distribute responsibility for unpredictable decisions. Current legislation is not prepared for this challenge.

🧩Cognitive Anatomy of the Myth: Which Psychological Mechanisms Are Exploited in AI Discourse

AI promises work not because the technology is impressive — but because they exploit predictable cognitive traps. Recognizing the mechanism means building a defense. More details in the Financial Pyramids and Scams section.

  1. Halo effect: one impressive achievement (ChatGPT writes code) transfers to the entire class of systems. Result: expectation of universality where there is narrow specialization.
  2. Appeal to authority: if a major company or famous scientist talks about revolution, critical thinking shuts down. Apologetics substitutes for analysis.
  3. Narrative of inevitability: "it will happen, it's only a matter of when" blocks the question "should this happen and under what conditions".
  4. Social proof: if everyone talks about AI as a miracle, dissonance with reality is suppressed by silence.

Each mechanism works independently, but together they create a cognitive shield through which facts cannot pass.

The AI myth is sustained not by evidence, but by psychological economy: it's easier to believe in revolution than to understand what the system actually does and doesn't do.

This is not manipulation in the classical sense — it's the natural result of information asymmetry. Those who understand the technology often can't explain it simply. Those who explain simply often don't understand the details.

Cognitive immunity
The ability to notice when emotion substitutes for analysis, when authority substitutes for evidence, when narrative substitutes for fact. This is not skepticism — it's mental hygiene.

Developing this immunity requires one thing: the habit of asking not "does this sound plausible", but "what exactly will happen, who will measure it, and how will I know it worked". AI myths crumble at first contact with this question.

āš–ļø Critical Counterpoint

The article can be challenged in several directions. Below are the main objections worth considering when evaluating its arguments.

Focus on risks ignores measurable benefits

The focus on ethical problems of AI may create an impression of technophobia, although the technology's real successes in medicine, logistics, and science are already documented. Many concerns remain theoretical, while practical benefits are measurable and growing.

AI opacity has been partially overcome

The claim about a "black box" is outdated: the development of explainable AI (XAI) and neural network interpretation methods (LIME, SHAP) makes decisions more transparent. Although full interpretability remains a challenge, progress is significant.

Sources don't reflect the current state

Reliance on sources from 2019–2022 is insufficient for a rapidly changing field. Breakthroughs of 2023–2025 (GPT-4, Gemini, multimodal models) have substantially changed the balance of risks and opportunities.

Personalization is not always manipulation

The critical tone regarding marketing use of AI may underestimate user autonomy. Not all personalized recommendations are manipulation — many people consciously choose convenience in exchange for data.

Lack of quantitative metrics weakens the argumentation

There is no precise data on the frequency of algorithmic biases and their real impact, which makes some claims vulnerable to accusations of alarmism. More rigorous metrics are needed to assess the scale of the problem.

Frequently Asked Questions

Q: Will AI really transform marketing and consumer behavior?
Yes, research confirms this. According to an article in the Journal of the Academy of Marketing Science (2020), AI will substantially transform marketing strategies and consumer behavior, including business models, sales processes, and customer service (S003). AI can analyze data in real time and provide personalized recommendations for the next product or optimal pricing. However, the authors warn of risks: if customers discover they are interacting with a bot, this may cause discomfort and negative consequences (S003).

Q: What are the main obstacles to AI fulfilling its promises?
The main issues are data privacy, algorithmic bias, and ethics. AI may not fulfill all its promises due to challenges related to privacy, systematic errors in algorithms, and ethical dilemmas (S003). Algorithms can reproduce and amplify existing social prejudices if trained on biased data. Additionally, the opacity of AI decisions (the "black box" problem) makes it difficult to understand why a system made a particular decision, which is critical in medicine, law, and finance.

Q: What is artificial general intelligence (AGI)?
AGI is a hypothetical AI capable of performing any intellectual task available to humans. Research by Bennett and Maruyama (2021) defines AGI as a system that can function as an "artificial scientist," combining logicist, emergentist, and universalist approaches (S002). Unlike narrow AI, which solves specific tasks (facial recognition, chess), AGI would possess a general capacity for learning, reasoning, and adaptation in any domain. Currently, AGI remains a theoretical goal, and there is no consensus on the timeline for its achievement.

Q: How does AI cope with non-standard situations?
Through Creative Problem Solving (CPS). According to a review by Gizzi et al. (2022), CPS is a subfield of AI focusing on methods for solving anomalous or non-routine problems in autonomous systems (S005). The emergence of increasingly autonomous systems requires AI agents that can handle environmental uncertainty through creativity (S005). This includes the ability to generate novel solutions, adapt to unforeseen situations, and use analogies from other domains.

Q: Could advanced AI become incomprehensible to humans?
Yes, this is a real risk. Research on superintelligence and the Fermi paradox (2021) warns: any non-human intelligence may construct solutions in such a way that the rational justification for their behavior (and consequently, the meaning of their signals) lies beyond what humans are inclined to notice or understand (S008). This means advanced AI may operate according to logic inaccessible to human understanding, creating risks of unpredictable behavior and loss of control.

Q: What data does AI use for marketing personalization?
Personal customer data, purchase history, and behavioral patterns. Research by Davenport et al. (2020) indicates that AI can be used to analyze such data and provide personalized recommendations in real time (S003). This includes data on product views, clicks, time on site, social interactions, and demographic information. However, collecting and using such data raises questions of privacy and user consent.

Q: Why do people believe optimistic promises about AI?
Due to cognitive biases and marketing narratives. Statements like "AI will make our lives better" (Mark Zuckerberg, cited in S003) create an optimistic frame that activates confirmation bias: people seek information confirming the desired future. Additionally, technology companies are interested in positive narratives to attract investment and users. The complexity of the technology creates information asymmetry: most people don't understand how AI works and rely on authoritative statements.

Q: What approaches exist for building AGI?
Three main ones: logicist, emergentist, and universalist. Bennett and Maruyama (2021) describe the logicist approach as based on formal logic and symbolic computation; the emergentist, on learning through environmental interaction (like neural networks); the universalist, on creating universal algorithms capable of self-improvement (S002). The authors conclude that creating an "artificial scientist" requires a hybrid or unified approach combining these methods.

Q: How can I tell whether an AI system is manipulating me?
Ask three questions: 1) Who owns the data the AI was trained on? 2) What metrics does the system optimize (engagement, profit, my well-being)? 3) Can I disable personalization and see "neutral" content? If the system doesn't disclose data sources, optimizes platform metrics (rather than user metrics), and doesn't provide control over personalization, the likelihood of manipulation is high. Research by Davenport et al. (2020) emphasizes that detecting interaction with a bot can cause discomfort and negative consequences (S003), highlighting the importance of transparency.

Q: Is AI already used in medicine?
Yes, especially in oncology. Source S001 is dedicated to AI application in oncology (past, present, and future), indicating active use of the technology for diagnosis, prognosis, and treatment personalization. AI analyzes medical images (CT, MRI) to detect tumors, predicts therapy response based on genetic data, and assists in radiation therapy planning. However, it's important to remember: AI is a tool for supporting physician decisions, not a replacement for clinical judgment.

Q: Can AI be genuinely creative?
Partially, but not completely. AI can generate text, images, and music (examples: GPT, DALL-E, Midjourney), but its "creativity" is based on recombining patterns from training data rather than genuine understanding or emotional experience. Research by Gizzi et al. (2022) shows that creative problem-solving in AI focuses on adapting to anomalous situations (S005), not on creating fundamentally new concepts. Human creativity involves intuition, cultural context, and the ability to transgress norms: qualities that AI cannot yet replicate.

Q: What are the main risks of autonomous systems?
Unpredictability, errors in non-standard situations, and ethical dilemmas. Autonomous systems (self-driving cars, drones) must make decisions under uncertainty, which requires creative problem-solving (S005). However, when a system encounters a situation not represented in training data, it may act unpredictably. Additionally, ethical questions arise: how should a self-driving car act in an unavoidable accident scenario (the "trolley problem")? Who bears responsibility for AI errors: the developer, the owner, or the system itself?

Q: Why does AI raise privacy concerns?
AI requires massive amounts of data for training, creating risks of breaches and misuse. Davenport et al. (2020) note that challenges related to data privacy may prevent AI from fulfilling all its promises (S003). Personalization requires collecting detailed information about behavior, preferences, and even biometric data. This creates risks: data breaches, unauthorized third-party access, and use of data for purposes users haven't consented to. Regulations (GDPR, CCPA) attempt to limit these risks, but technology often outpaces legislation.
Deymond Laplasa
Cognitive Security Researcher

Author of the Cognitive Immunology Hub project. Researches mechanisms of disinformation, pseudoscience, and cognitive biases. All materials are based on peer-reviewed sources.
// SOURCES
[01] How artificial intelligence will change the future of marketing
[02] Toward understanding the impact of artificial intelligence on labor
[03] Opinion Paper: "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
[04] A Systematic Review of the Literature on Digital Transformation: Insights and Implications for Strategy and Organizational Change
[05] The Road Towards 6G: A Comprehensive Survey
[06] Edge Intelligence: Paving the Last Mile of Artificial Intelligence With Edge Computing
[07] Scientific Machine Learning Through Physics-Informed Neural Networks: Where we are and What's Next
[08] Formal Methods: State of the Art and Future Directions
