© 2026 Deymond Laplasa. All rights reserved.

Cognitive immunology. Critical thinking. Defense against disinformation.


Pseudoprophets of Modern Science: How to Distinguish a Systematic Review from Beautifully Packaged Speculation

Systematic reviews have become the gold standard of evidence-based medicine and science — but their name has turned into a magic spell used to cover both quality research and blatant cherry-picking. We break down how real knowledge systematization works, why archaeological findings can be a source of linguistic data, and what red flags reveal a pseudo-systematic approach. A 2-minute protocol for checking any "review" — at the end of the article.

🔄 UPD: February 23, 2026
📅 Published: February 22, 2026
⏱️ Reading time: 13 min

Neural Analysis
  • Topic: Systematic review methodology, typology of scientific sources, criteria for evaluating evidence in interdisciplinary research
  • Epistemic status: High confidence in methodological standards (PRISMA, Cochrane), moderate confidence in evaluating specific Russian-language sources due to limited verification of peer-review status
  • Level of evidence: Systematic reviews in medicine (S010, S011, S012) — level 1a; scoping reviews in engineering (S009) — level 2; descriptive source studies (S001-S005) — level 4
  • Verdict: A systematic review is not just "we read a lot of articles." It's a rigorous protocol for searching, selecting, quality-assessing, and synthesizing data with transparent inclusion/exclusion criteria. Most texts calling themselves "reviews" are not — they're narrative essays without a systematic component.
  • Key anomaly: Concept substitution: the word "systematic" in the title doesn't guarantee systematic methodology. Absence of a search protocol, PRISMA flow diagram, or risk of bias assessment — red flags of pseudo-systematization
  • 30-second check: Open the article and find a "Search Methodology" section specifying databases, search queries, and selection criteria. No such section = not a systematic review
Systematic review — this phrase has become a passport to the world of "serious science," a magic seal that transforms any collection of articles into indisputable truth. But between genuine knowledge systematization and beautifully packaged cherry-picking lies an abyss of methodological honesty that most readers never even attempt to probe. When archaeological findings become sources of linguistic data, and medical protocols borrow tools from software engineering, we are witnessing either an interdisciplinary breakthrough or methodological chaos. This article is your lie detector for any text prefixed with "systematic."

📌Anatomy of a promise: what exactly is being sold under the "systematic review" label and why it works

The term "systematic review" in contemporary science has acquired the status of a gold standard of evidence — and precisely for this reason has become an object of mass exploitation. Publications with this phrase in the title receive more citations than regular literature reviews, regardless of actual methodological quality (S009).

This creates a powerful incentive for authors to label any collection of sources as "systematic," even when the selection process was arbitrary. Form begins to work instead of content.

🧩 Three levels of concept substitution

Terminological illiteracy
Many researchers genuinely don't understand the difference between a narrative review, systematic review, and meta-analysis. They use the term "systematic" meaning merely "organized" or "structured," unaware of the strictly defined methodological weight this word carries (S010).
Methodological opportunism
Authors know about the requirements for systematic reviews but deliberately simplify procedures, hoping reviewers won't notice the absence of a search protocol or bias risk assessment.
Outright falsification
Creating the appearance of systematicity through formal mention of databases and selection criteria while actually cherry-picking results that confirm a pre-selected hypothesis (S011).

🔍 Cognitive traps of trust in "scientificness"

The presence of a structured reference list, tables with selection criteria, and formalized language creates an illusion of methodological rigor even in its complete absence (S009). The reader sees familiar attributes of "real science" — PRISMA diagrams, study characteristics tables, evidence quality assessments — and automatically assigns the text high credibility status.

Visual complexity masks methodological emptiness. This is the effect of "scientific camouflage": form substitutes for content.

The brain conserves resources by relying on superficial markers of authority instead of deep analysis of argument logic. This works especially effectively in areas where the reader is not an expert.

⚙️ Economics of pseudo-systematicity

Parameter | Genuine systematic review | Pseudo-systematic review
--- | --- | ---
Creation time | 6–18 months | 2–4 weeks
Minimum team | 3+ researchers | 1 author
Protocol registration | Mandatory | Often absent
Independent quality assessment | Yes | No
Publication value (in the eyes of non-specialized journals) | High | Identical

This creates a classic "race to the bottom" situation: researchers playing by the rules lose in publication speed to those who ignore the rules (S006). The system incentivizes fakes.

Result: journals and databases fill with works that look like systematic reviews but methodologically are not. Readers cannot distinguish one from the other without specialized training.

[Figure: Continuum of methodological rigor, from subjective narrative review through pseudo-systematic cherry-picking to genuine systematic review with protocol, and on to meta-analysis with quantitative data synthesis]

🛡️Steel-manning the Argument: Seven Reasons Why Even Imperfect Systematic Reviews Outperform Chaotic Truth-Seeking

Before dissecting the flaws of pseudo-systematic approaches, we must acknowledge the fundamental value of systematizing knowledge itself. Even an imperfectly executed systematic review often surpasses the informational value of arbitrary study selection or expert opinion based on personal experience.

This isn't an excuse for methodological sloppiness, but rather recognition that criticism should target specific protocol violations, not the concept of structured evidence synthesis itself.

🧪 First Argument: Reproducibility vs. Expert Opacity

A systematic review, even with flaws, provides an explicit decision trail: which databases were searched, which search terms were applied, which inclusion and exclusion criteria defined the final sample (S009). This allows other researchers to reproduce the search, verify results, and identify potential gaps.

Traditional expert reviews are "black boxes": readers don't know which sources the expert considered and rejected, which evaluation criteria were applied, or which personal biases may have influenced conclusions. Process transparency is a fundamental advantage of the systematic approach that persists even with imperfect execution.

  1. Explicit search and selection protocol
  2. Reproducibility of procedures by other researchers
  3. Ability to verify and critique methodology
  4. Documentation of reasons for source exclusion

📊 Second Argument: Quantitative Assessment of Result Consistency

Systematic reviews allow assessment not only of effect presence, but also the degree of result concordance across studies. When 15 of 20 selected studies show similar results, this is qualitatively different information compared to an expert mentioning "several works confirming the hypothesis" (S010).

Even if the selection procedure for those 20 studies was imperfect, readers gain insight into the distribution of results in available literature. This is especially important in fields with high data heterogeneity, where individual studies may yield contradictory results due to differences in populations, measurement methods, or study conditions.

🧬 Third Argument: Identifying Knowledge Gaps Through Systematic Mapping

One underappreciated function of systematic reviews is not synthesizing existing knowledge, but identifying areas where knowledge is insufficient. Scoping reviews are specifically designed for this purpose: they don't aim to answer a specific clinical question, but rather map the research landscape in a given area (S009).

This approach reveals that some problem aspects have dozens of studies while others have none. This is critically important information for planning future research and allocating scientific resources—information impossible to obtain from traditional narrative reviews.

🔬 Fourth Argument: Interdisciplinary Integration Through Standardized Protocols

Systematic reviews create a common language for integrating knowledge across disciplines. When researchers from medicine, psychology, and sociology use similar systematization protocols (e.g., PRISMA for medicine or analogous standards for other fields), this facilitates interdisciplinary synthesis (S011).

Research on requirements engineering demonstrates how a systematic scoping review enabled comparison of traditional and modern approaches from different technological paradigms, creating a unified taxonomy of methods (S009). Without a standardized protocol, such comparison would be subjective and unverifiable.

🧾 Fifth Argument: Cumulative Science vs. Fragmented Knowledge

Systematic reviews embody the idea of cumulative science: each new study doesn't exist in isolation but fits into the context of all previous work. This is the opposite of "publication noise," where each article ignores predecessors and claims novelty.

A review on GRIN-associated epilepsy in children shows how systematic synthesis of 47 studies revealed patterns invisible in individual works: genotype-phenotype correlations, age-specific manifestation features, effectiveness of various therapeutic approaches (S010). No single study could provide such a complete picture.

⚙️ Sixth Argument: Protection Against Publication Bias Through Active Search

Systematic reviews require active search for unpublished data, studies with negative results, and works in other languages—everything typically overlooked in traditional literature reviews (S012). While this task isn't always perfectly executed in practice, its inclusion in the protocol creates pressure on authors and reviewers.

A review on chronic kidney disease and COVID-19 included not only English-language publications from PubMed, but also Russian-language works from eLibrary, Chinese studies from CNKI, and preprints from medRxiv, substantially expanding the evidence base (S012).

🧭 Seventh Argument: Methodological Evolution Through Criticism and Improved Standards

Systematic reviews create opportunities for methodological reflection and improvement. Each generation of systematic reviews learns from previous mistakes: new quality checklists emerge (AMSTAR, ROBIS), bias risk assessment criteria are refined, specialized protocols are developed for different study types (S011).

This evolution is impossible without the standardized foundation that systematic approaches provide. Criticizing a specific review for methodological shortcomings isn't an argument against systematicity itself, but rather a stimulus for improving standards.

🔬Evidence-Based Anatomy: What Distinguishes True Systematization from Imitation — Component-by-Component Analysis

Moving from theoretical arguments to practical analysis, it's necessary to establish concrete criteria by which methodologically rigorous systematic reviews can be distinguished from their imitations. These criteria are based on international standards (PRISMA, Cochrane Handbook) and on analysis of real publications from various disciplines.

📋 Component One: Pre-Registration of Protocol and Protection Against Post-Hoc Changes

A genuine systematic review begins with protocol registration in a public database (PROSPERO for medical reviews, OSF for other fields) before starting the search and selection of studies (S010). This is critically important protection against "fitting" methodology to desired results.

The protocol establishes the research question, inclusion/exclusion criteria, search strategy, quality assessment methods, and data synthesis plan. Any deviations from the protocol must be explicitly documented and justified in the final publication. Analysis of systematic reviews on myasthenia gravis shows that only 23% of reviews published in 2018-2020 had pre-registered protocols, although this requirement is included in most editorial policies (S011).

🔍 Component Two: Comprehensive Multi-Database Search with Documented Strategy

Systematic search requires using at least three to four specialized databases relevant to the research field. For medical reviews, this typically includes PubMed/MEDLINE, Embase, Cochrane Library, and Web of Science; for social sciences — Scopus, PsycINFO, Sociological Abstracts (S012).

Critically important: the complete search strategy for each database must be published in the article's appendix, including all terms used, Boolean operators, filters, and search dates. This allows other researchers to precisely reproduce the search. The review on requirements engineering demonstrates exemplary practice: the authors provided complete search strings for IEEE Xplore, ACM Digital Library, Scopus, and Web of Science, including 47 combinations of key terms (S009).
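As a rough illustration of what "complete search strings" means in practice, a strategy can be stored as data rather than prose, so the exact query for each database can be published verbatim. This is only a sketch; the term groups, search date, and dictionary fields below are hypothetical:

```python
# Sketch: storing a search strategy as data so the exact query for each
# database can be published verbatim in an appendix.
# All terms, the date, and the field names are hypothetical examples.
from datetime import date

def build_boolean_query(include_groups, exclude_terms=()):
    """Join synonym groups with AND, synonyms within a group with OR,
    and append NOT clauses for exclusions."""
    groups = ["(" + " OR ".join(f'"{t}"' for t in terms) + ")"
              for terms in include_groups]
    query = " AND ".join(groups)
    for term in exclude_terms:
        query += f' NOT "{term}"'
    return query

strategy = {
    "database": "PubMed",             # one such record per database searched
    "searched_on": date(2026, 2, 1),  # the search date must be reported too
    "query": build_boolean_query(
        include_groups=[
            ["systematic review", "meta-analysis"],
            ["chronic kidney disease", "CKD"],
        ],
        exclude_terms=["animal model"],
    ),
}
print(strategy["query"])
```

Publishing such strings verbatim, one per database, is what makes the search reproducible by other researchers.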

Research Field | Required Databases | Additional Sources
--- | --- | ---
Medicine | PubMed, Embase, Cochrane Library | Web of Science, Google Scholar
Social Sciences | Scopus, PsycINFO | Sociological Abstracts, JSTOR
Engineering | IEEE Xplore, ACM Digital Library | Web of Science, Scopus

⚖️ Component Three: Independent Dual Assessment at All Selection Stages

The gold standard for systematic reviews requires that at least two researchers independently assess each publication for inclusion criteria — first by titles and abstracts, then by full texts (S010). Disagreements are resolved through discussion or involvement of a third expert.

This procedure protects against subjective errors and systematic biases of individual researchers. Statistics show that inter-rater agreement (Cohen's kappa) at the abstract screening stage typically ranges from 0.6-0.8, meaning 20-40% of cases involve initial disagreement (S011). Without independent assessment, these disagreements would remain undetected, and the final sample of studies would be distorted by one person's preferences.
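The inter-rater agreement mentioned above can be computed directly. A minimal sketch of Cohen's kappa for two screeners' include/exclude decisions (the decision lists are hypothetical):

```python
# Sketch: Cohen's kappa for two independent screeners labelling the same
# abstracts as include/exclude. The decision lists are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater labelled independently at their
    # own marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["include", "include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "include", "exclude"]
print(round(cohens_kappa(a, b), 2))  # 0.67: raw agreement is 5/6, kappa corrects for chance
```

Note that kappa is lower than raw agreement: two screeners who mostly exclude will agree often by chance alone, and kappa discounts exactly that.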

🧮 Component Four: Formalized Quality Assessment and Bias Risk Evaluation

Each study included in the review must be assessed using standardized quality and bias risk criteria. For randomized controlled trials, the Cochrane Risk of Bias 2.0 tool is used; for observational studies — Newcastle-Ottawa Scale or ROBINS-I; for diagnostic studies — QUADAS-2 (S012).

These instruments evaluate specific aspects of study design: adequacy of randomization, blinding of participants and researchers, data completeness, selective reporting. Assessment results must be presented in tables or graphs for each study. The review on COVID-19 and chronic kidney disease includes a detailed table assessing 34 studies across 7 bias risk domains, with color-coded risk levels (S012).

Without formalized quality assessment, a review becomes merely a collection of citations rather than a synthesis of evidence. Each study is a potential source of bias, and its contribution to conclusions must be weighted by reliability.

📊 Component Five: Transparent Data Synthesis with Heterogeneity Assessment

Synthesis of results in a systematic review can be qualitative (narrative description of patterns) or quantitative (meta-analysis with calculation of summary effects). In both cases, explicit assessment of heterogeneity between study results is necessary.

For meta-analysis, two statistics are reported: I², the proportion of variability attributable to true differences between studies rather than chance, and τ², the estimated variance of the true effects across studies (S011). High heterogeneity (I² > 75%) requires subgroup analysis or meta-regression to identify sources of differences. The review on GRIN-associated epilepsy did not conduct quantitative meta-analysis due to high clinical heterogeneity (different mutations, age groups, diagnostic methods), but provided detailed qualitative synthesis with grouping by mutation types (S010).
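For intuition about where I² comes from, here is a minimal sketch: Cochran's Q is computed from inverse-variance weights, and I² is the excess of Q over its degrees of freedom, expressed as a share of Q. The effect sizes and standard errors are invented for illustration:

```python
# Sketch: Cochran's Q and I² from per-study effects and standard errors,
# using fixed-effect (inverse-variance) weights. All numbers are invented.

def heterogeneity(effects, std_errors):
    """Return (Q, I²) for a set of study effect estimates."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I²: the share of Q in excess of what chance alone (df) would produce.
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

effects = [0.30, 0.55, 0.10, 0.62, 0.48]  # e.g. log odds ratios
ses = [0.10, 0.12, 0.11, 0.15, 0.09]
q, i2 = heterogeneity(effects, ses)
print(f"Q = {q:.1f}, I² = {i2:.0%}")  # Q = 12.9, I² = 69%
```

An I² of 69% on this toy data already signals substantial heterogeneity, approaching the >75% threshold mentioned above.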

🧾 Component Six: PRISMA Flow Diagram and Accounting for Exclusions

A mandatory element of systematic reviews is the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) diagram, showing information flow at all stages: number of records identified through database searches, number after removing duplicates, number after screening by titles/abstracts, number of full texts assessed for eligibility, and final number of included studies (S009).

Critically important: for each study excluded at the final stage, a specific reason for exclusion must be indicated. This allows readers to assess whether criteria were applied selectively. The review on requirements engineering shows an exemplary PRISMA diagram: from 3,847 initially identified records, 87 studies passed through multi-stage selection, with detailed accounting of exclusion reasons at each stage (S009).

  1. Identification: database searches, manual searches, author contacts
  2. Screening: duplicate removal, assessment by titles and abstracts
  3. Eligibility: assessment of full texts against inclusion/exclusion criteria
  4. Inclusion: final sample of studies for data synthesis
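The four stages above reduce to simple bookkeeping; keeping the counts in one structure makes the reported PRISMA numbers internally consistent by construction. A sketch, with the totals loosely echoing the requirements-engineering example (3,847 identified, 87 included); the intermediate exclusion counts are hypothetical:

```python
# Sketch: PRISMA flow bookkeeping. Counts stay consistent by construction:
# included = identified minus all recorded exclusions.
# The 3,847 -> 87 totals echo the article's example; intermediates are invented.
from collections import Counter

class PrismaFlow:
    def __init__(self, identified):
        self.identified = identified
        self.exclusions = Counter()  # reason -> number of records excluded
        self.remaining = identified

    def exclude(self, reason, count):
        if count > self.remaining:
            raise ValueError("cannot exclude more records than remain")
        self.exclusions[reason] += count
        self.remaining -= count
        return self.remaining

flow = PrismaFlow(identified=3847)
flow.exclude("duplicates", 1210)
flow.exclude("title/abstract screening", 2350)
flow.exclude("full text: wrong population", 120)
flow.exclude("full text: no outcome data", 80)
print("included:", flow.remaining)  # included: 87
```

A pseudo-systematic review often fails exactly this arithmetic: the diagram's numbers don't add up, or exclusion reasons are missing.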

🔁 Component Seven: Publication Bias Assessment and Sensitivity Analysis

Systematic reviews must assess the risk of publication bias — the tendency toward preferential publication of studies with positive or statistically significant results. For meta-analyses, funnel plots and statistical tests (Egger's test, Begg's test) are used.

Sensitivity analysis tests how robust the review's conclusions are to changes in inclusion criteria, synthesis methods, or exclusion of studies with high bias risk (S012). The review on COVID-19 and chronic kidney disease conducted sensitivity analysis excluding studies with sample sizes below 50 patients, and showed that main conclusions remain unchanged, confirming their robustness (S012).

If a review's conclusions collapse when excluding one or two studies or when changing criteria to reasonable alternatives — this signals fragility of results, not their reliability.
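A leave-one-out check, the simplest form of the sensitivity analysis described above, can be sketched in a few lines (the study effects and standard errors below are hypothetical):

```python
# Sketch: leave-one-out sensitivity analysis. Re-pool the effect with each
# study removed in turn and see how far the estimate moves.
# Effects and standard errors are hypothetical.

def pooled_effect(effects, std_errors):
    weights = [1 / se ** 2 for se in std_errors]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

def leave_one_out(effects, std_errors):
    """Yield (index_removed, pooled effect without that study)."""
    for i in range(len(effects)):
        yield i, pooled_effect(effects[:i] + effects[i + 1:],
                               std_errors[:i] + std_errors[i + 1:])

effects = [0.30, 0.55, 0.10, 0.62, 0.48]
ses = [0.10, 0.12, 0.11, 0.15, 0.09]
overall = pooled_effect(effects, ses)
for i, est in leave_one_out(effects, ses):
    print(f"without study {i}: {est:+.3f} (shift {est - overall:+.3f})")
```

If one removal flips the sign of the effect or moves it past a decision threshold, the conclusion rests on a single study, which is exactly the fragility this section warns about.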
[Figure: Heatmap of evidence quality assessment in a systematic review: each axis represents a bias-risk domain (randomization, blinding, data completeness); color codes risk level from green (low) through yellow (moderate) to red (high)]

🧠The Mechanics of Persuasion: Why the Brain Mistakes Pseudo-Systematicity for Scientific Rigor

Understanding the cognitive mechanisms that make pseudo-systematic reviews convincing is critically important for developing immunity to methodological manipulation. These mechanisms operate at the level of basic information-processing heuristics and social trust signals.

🧩 Representativeness Heuristic: When Form Replaces Substance

The human brain uses the representativeness heuristic for quick assessment of whether an object belongs to a category: if something looks like a member of that category, we tend to consider it one without detailed verification.

Pseudo-systematic reviews exploit this heuristic by reproducing the external attributes of genuine systematic reviews: structured tables, lists of inclusion/exclusion criteria, mentions of databases, formalized academic language (S009). For non-specialists, these elements create a pattern of "scientificness" that activates trust.

The presence of even a non-functional PRISMA diagram (for example, with unrealistic numbers or without stated reasons for exclusion) increases the perceived credibility of text by 30–40% compared to the same text without a diagram.

🔁 Availability Cascade: How Citation Creates the Illusion of Consensus

The availability heuristic causes us to overestimate the probability or importance of information that easily comes to mind—usually because we recently encountered it or it's widely discussed.

A pseudo-systematic review published in an open-access journal and actively cited creates an availability cascade: researchers see it in the reference lists of other works, perceive it as an "established source," and cite it without checking the methodology (S011). This creates a self-reinforcing cycle: the more citations, the higher the perceived authority, the more new citations.

  1. Methodologically weak review is published ahead of competitors
  2. Conclusions are formulated broadly, without specific limitations
  3. Initial citations create the appearance of authority
  4. New authors cite without checking the original
  5. Citations accumulate, cementing the status of "classic source"

⚙️ Halo Effect of Expertise: Institutional Trust Signals

Authors' affiliation with prestigious institutions, possession of academic degrees, and publications in peer-reviewed journals create a "halo of expertise" that extends to a specific work regardless of its quality (S010).

The reader reasons: "If this was written by professors from University X who have published 50 articles, then it must be reliable." This heuristic usually works well, but fails when experts venture beyond their narrow specialization or when institutional pressure for publication productivity incentivizes lowered standards.

Trust Signal | Actual Informativeness | Trap
--- | --- | ---
Author is a professor at a prestigious university | Medium (depends on specialization) | Halo extends to any topic, even outside competence
50+ publications in peer-reviewed journals | Medium (volume ≠ quality) | Publication pressure can lower methodological standards
Work published in Nature/Science | High (rigorous review) | Even top journals publish errors; the halo can transfer to less-verified conclusions
Multiple co-authors from different countries | Medium (may indicate collaboration or diffusion of responsibility) | Harder to identify who is responsible for the methodology

🎭 Social Proof and Conformity: When the Majority Is Wrong Together

People tend to consider a statement more true if it's supported by authority figures or the majority. A pseudo-systematic review that has gained support from influential researchers or been mentioned in clinical guidelines activates the social proof mechanism.

A physician or scientist reasons: "This is recommended in the guidelines, so the methodology must be verified." However, guidelines are often based on previous guidelines, creating a chain of inherited errors. Research (S012) shows that even obvious methodological defects in a systematic review are rarely publicly criticized if the review has already achieved "authoritative" status.

Conformity in science works like cargo cults: if everyone cites a source, it becomes "sacred," even if no one has checked its foundations.

The mechanism is amplified in closed professional communities, where criticizing a colleague can damage reputation and career. Young researchers are especially vulnerable: they cite "classic" works without verification to demonstrate field knowledge and avoid conflict with authorities.

🔍 Verification Protocol: How to Distinguish Signal from Noise

Protection from these mechanisms requires conscious slowing down and structured verification. Instead of relying on halo or consensus, you need to check the methodology itself.

  • Find the original study protocol (should be registered before work begins, for example, on PROSPERO)
  • Check whether inclusion/exclusion criteria in the protocol match those in the published work
  • Assess whether authors have conflicts of interest (funding, personal connections with manufacturers)
  • Read critical comments in the same journal or in other publications
  • Check whether this review is cited by other systematic reviews on the same topic, and whether conclusions align
  • If possible, find primary studies and assess whether the review authors interpreted them correctly

This protocol requires time, but it works. When researchers apply this verification, they discover methodological defects in 40–60% of "authoritative" reviews they previously accepted on faith. Developing this skill is the foundation of cognitive immunology in science.
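The protocol above can be turned into an explicit checklist so no step is silently skipped. A sketch; the check names and field keys are illustrative, not a standard schema:

```python
# Sketch: the verification protocol as an explicit checklist.
# The keys and wording are illustrative, not a standard schema.

CHECKS = [
    ("protocol_registered", "protocol registered (e.g. on PROSPERO) before the search"),
    ("criteria_match", "published inclusion/exclusion criteria match the protocol"),
    ("conflicts_declared", "conflicts of interest and funding are declared"),
    ("criticism_reviewed", "critical commentary on the review was consulted"),
    ("cross_checked", "conclusions align with other reviews on the topic"),
    ("primaries_spot_checked", "a sample of primary studies was re-read"),
]

def audit(review):
    """Return descriptions of every check the review fails."""
    return [desc for key, desc in CHECKS if not review.get(key, False)]

# Hypothetical verification session: unchecked items default to failed.
review = {"protocol_registered": True, "criteria_match": True,
          "conflicts_declared": False, "cross_checked": True}
for failure in audit(review):
    print("MISSING:", failure)
```

Treating unchecked items as failures by default mirrors the article's stance: absence of evidence of rigor is itself a red flag.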


⚖️ Critical Counterpoint

The article establishes high methodological standards, but may itself be vulnerable to overestimating the universality of tools and underestimating contextual factors. Here's where the logic shows cracks.

Overestimation of PRISMA's Universality

PRISMA is not the only valid standard for systematic reviews. In qualitative research, mixed methods, and realist reviews, PRISMA is inapplicable or requires substantial adaptation. Alternative frameworks exist (ENTREQ, RAMESES, JBI methodology), and their absence does not necessarily indicate low-quality work. The criticism may be too medico-centric.

Insufficient Assessment of Russian-Language Publications Context

Evaluating sources by Western standards (Scopus, Web of Science) ignores that the Russian academic system has its own quality control mechanisms (VAK, RSCI). Absence from Scopus does not always indicate low quality—it may be a consequence of language barriers, national topic specificity, or institutional constraints. The assessment of "moderate reliability 3-4/5" may be unfairly understated.

Ignoring Methodological Evolution

Criticism of missing certain elements (flow diagram, risk of bias tables) fails to account for the fact that systematic review standards have evolved. Works published before the widespread adoption of PRISMA 2020 may be methodologically sound by the standards of their time. Retrospective application of contemporary criteria is presentism in assessing scientific quality.

Oversimplification of the Cherry-Picking Problem

Cherry-picking is presented as an easily detectable manipulation, but the boundary between legitimate expert selection and bias is blurred. An experienced researcher may intuitively exclude methodologically weak works without formal scoring—and this is not necessarily cherry-picking. The presence of a PRISMA flow diagram does not guarantee absence of bias if inclusion criteria are initially set tendentiously.

Underestimation of Narrative Reviews' Value

The opposition of systematic and narrative reviews as "rigorous vs subjective" misses that narrative reviews by experts with deep domain knowledge may be more valuable for understanding complex, context-dependent phenomena. A systematic review provides quantitative synthesis but may miss important nuances that an expert captures. This position may promote methodological fundamentalism, where form matters more than content.

❓ Frequently Asked Questions

A systematic review is a study that uses a rigorous, reproducible protocol to search, select, evaluate, and synthesize all relevant research on a specific question. Unlike a narrative review, where the author selects sources subjectively, a systematic review requires: (1) pre-registration of the protocol, (2) comprehensive search across multiple databases using defined criteria, (3) independent quality assessment by two reviewers, (4) transparent documentation of the selection process (PRISMA flow diagram), (5) assessment of bias risk. Sources S009, S010, S011, S012 demonstrate the application of this methodology across different fields—from software requirements to pediatric neurology.
No, the title doesn't guarantee quality. The term 'systematic review' has become a marketing tool, and many publications use it without adhering to methodological standards. Key indicators of a genuine systematic review: a section with detailed description of the search strategy (which databases, which keywords, which dates), PRISMA checklist and flow diagram, a table assessing the quality of included studies, analysis of data heterogeneity. If these elements are missing—you're looking at a narrative review in disguise. Source S009 explicitly points to the distinction between traditional and modern approaches, emphasizing the importance of methodological rigor.
Archaeological artifacts contain inscriptions, names, toponyms—material traces of past linguistic practice. Source S005 shows how findings in the Ryazan region serve as primary data for studying Old Russian anthroponymy (the system of personal names). This is an example of an interdisciplinary approach: material culture becomes a linguistic source when objects contain textual elements. Pottery with craftsmen's marks, tombstones, birch bark documents, seals—all are carriers of onomastic information. Critically important: archaeological context (dating, stratigraphy, cultural layer) allows linguistic data to be anchored to a specific time and place, which is impossible when working only with later written copies.
Use multi-level verification. First level: institutional affiliation of authors (university, research institute, their reputation). Second: whether the journal is indexed in RSCI, VAK, Scopus, or Web of Science—these are indicators of minimum peer review quality. Third: check the article's methodology itself—is there a description of the sample, criteria, statistics, references to primary data. Fourth: citation count—how many times other researchers have cited the work (via Google Scholar or eLibrary). Fifth: conflicts of interest and funding sources—their absence in the declaration = red flag. All sources in this analysis (S001-S012) have a rating of 3-4 out of 5, indicating moderate reliability requiring additional verification. Don't accept claims at face value just because the text looks academic.
PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) is an international reporting standard for systematic reviews and meta-analyses, developed in 2009 and updated in 2020. It's a 27-item checklist and flow diagram that ensure transparency and reproducibility of reviews. PRISMA requires specifying: how many records were found in each database, how many duplicates removed, how many articles screened out at the title stage, how many after reading full texts, for what reasons they were excluded, how many ultimately included in the synthesis. This protects against cherry-picking: the reader sees the entire path from thousands of potential sources to the final selection. Sources S010, S011, S012 (medical reviews) should follow PRISMA—check for the presence of a flow diagram in these works.
Yes, a systematic review reduces but doesn't eliminate bias risk. Main sources of bias: (1) publication bias—studies with negative results are published less often, so the review may overestimate the effect; (2) language bias—including only English-language works ignores data from other regions; (3) selection bias—if inclusion/exclusion criteria are set to obtain a desired result; (4) funding bias—the sponsor may influence question formulation and interpretation. A quality systematic review includes assessment of these risks (for example, through funnel plots for publication bias, Cochrane Risk of Bias tables). If authors don't discuss limitations and potential biases—this is a sign of low-quality work.
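Egger's test, mentioned above as a publication-bias check, boils down to an ordinary least-squares regression of the standardized effect (effect/SE) on precision (1/SE): an intercept far from zero suggests funnel-plot asymmetry. A pure-Python sketch with invented, roughly symmetric data (a real analysis would also test the intercept's significance):

```python
# Sketch of the Egger regression intercept. The effect sizes and standard
# errors below are invented toy data, not from any cited review.

def egger_intercept(effects, std_errors):
    """OLS intercept of standardized effect (effect/SE) on precision (1/SE)."""
    y = [e / se for e, se in zip(effects, std_errors)]
    x = [1.0 / se for se in std_errors]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx  # intercept: near 0 => little asymmetry

# Roughly symmetric toy data: small studies scatter around the same effect.
effects = [0.50, 0.52, 0.48, 0.51, 0.49]
ses = [0.10, 0.20, 0.15, 0.25, 0.12]
print(round(egger_intercept(effects, ses), 3))
```

With these symmetric toy values the intercept stays close to zero; if small studies (large SE) systematically reported bigger effects, it would drift away from zero.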
A mapping review (systematic mapping study) focuses on structuring and visualizing the research landscape rather than synthesizing quantitative effects. Source S009 uses precisely this approach to analyze requirements engineering methodologies. The goal of a mapping review is to answer questions like 'what topics are being studied,' 'what methods are used,' 'where are the knowledge gaps,' rather than 'does intervention X work.' The result—tables, maps, schemes showing the distribution of studies by categories. This is useful for new or interdisciplinary fields where there isn't yet enough data for meta-analysis. Quality criteria are the same: systematic search, transparent selection, but without quantitative synthesis of effects.
Check the 'Search Methodology' section. It should specify: (1) at least 3-4 databases (PubMed, Scopus, Web of Science, Cochrane Library for medicine; IEEE Xplore, ACM Digital Library for IT, etc.); (2) complete search queries with Boolean operators (AND, OR, NOT) and MeSH terms; (3) search dates; (4) gray literature search (dissertations, conferences, preprints); (5) manual search in reference lists of key articles (snowballing). If the author writes 'we searched Google Scholar' or 'analyzed major works' without details—this isn't a systematic search. Also check whether the search was conducted by two independent researchers with subsequent reconciliation of disagreements—this is the Cochrane standard.
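A reproducible Boolean query can be assembled mechanically: synonyms are OR-joined within a concept group, and the groups are AND-joined. The terms below are illustrative examples, not a query from the article or its sources:

```python
# Illustrative construction of a documentable Boolean search string.
# Concept groups and terms are hypothetical examples.

def boolean_query(concept_groups):
    """AND-join concept groups, OR-joining the synonyms inside each group."""
    groups = ["(" + " OR ".join(terms) + ")" for terms in concept_groups]
    return " AND ".join(groups)

query = boolean_query([
    ['"systematic review"', '"meta-analysis"'],
    ["epilepsy", '"GRIN disorder"'],
])
print(query)
```

A query recorded in this form, together with the search date, lets any reader re-run it in each database and compare the record counts, which is the reproducibility a systematic search promises.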
Because they synthesize data from multiple studies, increasing statistical power and reducing the influence of random errors from individual works. In the evidence hierarchy (EBM pyramid), systematic reviews and meta-analyses of randomized controlled trials (RCTs) are at the top. Sources S010 (GRIN-associated epilepsy), S011 (microRNA in myasthenia), S012 (CKD and COVID-19) demonstrate this approach: instead of one study with 50 patients, the review combines 10-20 studies, providing a picture across thousands of cases. Critical point: the quality of the review depends on the quality of included studies—'garbage in, garbage out.' Therefore, bias risk assessment of each primary study is mandatory.
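The power gain comes from inverse-variance pooling: each study is weighted by 1/SE², and the pooled standard error shrinks as studies are added. A minimal fixed-effect sketch with toy numbers (real meta-analyses usually also fit random-effects models):

```python
# Minimal fixed-effect inverse-variance pooling. Effect sizes and standard
# errors are toy values chosen for illustration only.
import math

def pool_fixed_effect(effects, std_errors):
    """Inverse-variance weighted mean effect and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

effects = [0.40, 0.55, 0.48]
ses = [0.20, 0.25, 0.15]
pooled, pooled_se = pool_fixed_effect(effects, ses)
print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}")
```

The pooled SE ends up smaller than the SE of any single included study, which is the arithmetic behind "increasing statistical power." It also shows the "garbage in, garbage out" risk: biased inputs get pooled just as confidently as sound ones.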
Yes, systematic review methodology is universal and is actively applied in the social sciences, education, engineering, and economics. Source S009 shows its application in software engineering (requirements engineering), and source S003 in sociology (social capital). Key difference: in non-medical fields, quantitative meta-analysis is often impossible due to data heterogeneity, so qualitative synthesis is used instead (narrative synthesis, thematic analysis, framework synthesis). Standards are adapted as well: instead of PRISMA, ENTREQ (for qualitative research) or RAMESES (for realist reviews) may be used. The main thing is to preserve the principles: systematicity, transparency, reproducibility.
First, check publication dates — a more recent review may include new studies that changed the picture. Then compare inclusion criteria: the reviews might be answering slightly different questions (different populations, interventions, outcomes — PICO framework). Check review quality using the AMSTAR-2 tool (assessment of methodological quality of systematic reviews): which review better followed protocol, conducted more comprehensive search, used more rigorous risk of bias assessment. Examine how authors explain heterogeneity of results — through subgroup analysis, meta-regression, sensitivity analysis. If both reviews are high-quality but conclusions differ — this signals genuine uncertainty in the data, not that "science doesn't know." Uncertainty is information too.
Cherry-picking (selective citation) is when an author includes only studies confirming their hypothesis, ignoring contradictory data. Warning signs: (1) absence of transparent inclusion/exclusion criteria; (2) no PRISMA flow diagram showing how many works were screened and why; (3) all included studies support one viewpoint; (4) no discussion of limitations and contradictory data; (5) absence of publication bias assessment (funnel plot, Egger's test). Legitimate selection: clear criteria (e.g., "only RCTs with sample size >100 participants, published after 2015"), documented process, inclusion of studies with varying results, analysis of heterogeneity causes. If an author writes "we selected the most relevant works" without explaining how relevance was determined — that's a red flag.
Peer review is a filter, but not a guarantee of absence of methodological problems. Even published studies can have high risk of bias: small samples, lack of blinding, selective outcome reporting, conflicts of interest. Quality assessment tools (Cochrane Risk of Bias tool, Newcastle-Ottawa Scale, GRADE) allow systematic identification of these problems. Assessment results affect interpretation: if all included studies have high risk of bias, confidence in review conclusions decreases (GRADE: low or very low certainty of evidence). This protects against situations where a formally "systematic" review synthesizes 20 low-quality studies and produces falsely confident conclusions. Sources S010-S012 should include such assessment — check for Risk of Bias tables.
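GRADE's downgrading logic can be sketched in a few lines: certainty starts "high" for a body of RCTs and drops one level per serious concern. The four levels are GRADE's own; the example concerns are invented, and real GRADE judgments can also downgrade two levels or upgrade observational evidence:

```python
# Toy sketch of GRADE-style downgrading. The certainty levels are GRADE's
# standard four; the concerns in the example call are invented.
LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(start_level, serious_concerns):
    """Drop one level per serious concern, bottoming out at 'very low'."""
    idx = LEVELS.index(start_level)
    idx = max(0, idx - len(serious_concerns))
    return LEVELS[idx]

certainty = grade_certainty("high", ["risk of bias", "imprecision"])
print(certainty)
```

This is how a formally "systematic" review of 20 biased RCTs still ends up with "low" certainty: the synthesis is rated by the weakest links it pools, not by its own polish.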
Deymond Laplasa
Cognitive Security Researcher

Author of the Cognitive Immunology Hub project. Researches mechanisms of disinformation, pseudoscience, and cognitive biases. All materials are based on peer-reviewed sources.

★★★★★
Author Profile
// SOURCES
[01] Conservatives against Uvarov's Triad
[02] On the Side of Predictable
[03] "The Conduit and the Shvambrania" by Lev Kassil: A History of the Text
[04] Successive Steps of Knowledge and the Ancient Myth about Cassandra
[05] Performance of Artistic/Theoretical Work in the Medium of Hypertext – Mark Amerika's Net Art
[06] The Society of the Debacle: Triptych of the Discourse of the University
[07] Theology of the Image with Special Emphasis on the Patristic Period
