
© 2026 Deymond Laplasa. All rights reserved.

Cognitive immunology. Critical thinking. Defense against disinformation.

🛐 Religions
⚠️Ambiguous / Hypothesis

VERA and Religion: Why Search Engines Confuse a Japanese Telescope with the Philosophy of Faith — and What This Says About Information Quality

The search query "faith religion comparison" returns a chaotic mix: the Japanese VERA radio telescope, the Vera Rubin Observatory, Polish texts on philosophy of religion, and consensus algorithms. This isn't coincidence—it's a symptom of information noise, where search engines fail to distinguish context. We break down why this happens, how modern search algorithms work, and what protocol to use to avoid drowning in irrelevant results.

📅 Published: February 25, 2026
⏱️ Reading time: 10 min

Neural Analysis
  • Topic: Analysis of information noise in search results for "comparison faith religion" — why search engines return unrelated results (astronomy + philosophy of religion)
  • Epistemic status: High confidence in technical analysis of search algorithms; moderate confidence in source quality assessment
  • Evidence level: Analysis of actual search results (S001–S011), VERA technical specifications (S002, S004, S006), JSTOR academic sources (S001, S003, S005, S007, S010)
  • Verdict: Search engines fail to handle the ambiguity of the term "VERA" (telescope acronym vs. the Russian word vera, "faith"). Results demonstrate a critical problem: lack of semantic context leads to information chaos. Users receive a mix of astrophysics, philosophy of religion, and consensus algorithms with no quick filtering option.
  • Key anomaly: All 11 sources have identical reliability ratings (3/5) but zero thematic overlap. This indicates a ranking algorithm failure: relevance is assessed by keywords, not query intent.
  • 30-second check: Enter the query in a search engine → review the first 5 results → if you see a mix of astronomy and philosophy with no connection — you've encountered information noise. Add clarifying terms (e.g., "philosophy of faith" or "VERA telescope").
You type "comparison faith religion" into the search bar — and get a Japanese radio telescope, the Vera Rubin Observatory, Polish philosophical treatises, and consensus algorithms. This isn't a bug. It's a symptom of how modern search engines process ambiguity, context, and semantic proximity — and why users are left alone with informational chaos. We break down the mechanics of failure when technology confuses astronomy with philosophy, and you lose an hour of your life filtering garbage.

📌What happens when a search engine doesn't understand what you're looking for — anatomy of the query "comparison faith religion"

The query "comparison faith religion" seems straightforward: a user is searching for a comparison of faith concepts across different religious traditions. Search engines return something entirely different: scientific articles about the Japanese radio telescope VERA (VLBI Exploration of Radio Astrometry), research from the Vera Rubin Observatory, philosophical texts, and papers on consensus algorithms.

This isn't an error — it's the result of how algorithms process ambiguous terms without sufficient context.

🔎 Why "VERA" becomes a collision point: homonymy in search queries

The word "VERA" is a classic example of homonymy: one form designates several unrelated entities. In astronomy, VERA is a Japanese radio interferometry project for high-precision astrometry and observation of maser sources in molecular clouds. In another context, it is the name of the Vera Rubin Observatory, a major astronomical survey project whose goals include the study of dark matter. In a third context, it coincides with the Russian word for religious faith (vera) and the Latin adjective vera, "true."

Search engines use natural language processing (NLP) models that rely on statistical patterns. When a query contains the word "vera" without explicit markers (such as "religious faith"), the algorithm attempts to guess intent based on match frequency in the index.

If the database contains many documents where "VERA" appears in scientific publications (ArXiv, JSTOR), the system may interpret the query as a search for information about the astronomical project. The words "comparison" and "religion" are perceived by the system as noise or metadata, rather than context clarification.
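This frequency-matching behavior can be sketched with a toy lexical ranker. The documents, query, and scoring function below are invented for illustration; real engines combine many richer signals, but the failure mode is the same:

```python
# Toy lexical ranker: scores a document by counting tokens that appear in
# the query. This is the context-blind matching described in the text.
def lexical_score(query, doc):
    q = set(query.lower().split())
    return sum(1 for token in doc.lower().split() if token in q)

# Invented document snippets standing in for indexed pages.
docs = {
    "astro": "vera astrometry catalog vera maser sources vera observations",
    "philosophy": "faith and religion in different traditions",
}
query = "comparison faith religion vera"

scores = {name: lexical_score(query, text) for name, text in docs.items()}
# The astronomy snippet wins purely because "vera" repeats three times,
# even though it says nothing about faith or religion.
print(scores)
```

Because the ranker only counts matches, repeating the ambiguous token outweighs genuine topical relevance.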

🧠 How semantic proximity works — and why it fails

Modern algorithms (BERT, GPT-based embeddings) use vector representations of words, where semantically similar terms are positioned close together in multidimensional space. "Faith" and "VERA" may end up in the same cluster due to morphological similarity, especially if the system is trained on multilingual corpora.

Accuracy problem: search engine accuracy can drop by an estimated 30–40% when processing ambiguous queries without explicit context. The system cannot definitively determine whether the query concerns philosophical analysis, an astronomical project, or something else entirely.

Contextual blending effect: add the word "religion" (frequently found in philosophical texts), and the algorithm begins mixing contexts, returning results from different subject areas.
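The embedding-proximity failure can be illustrated with hand-made vectors. The three "dimensions" and all values below are invented (real embeddings have hundreds of dimensions); the point is that an ambiguous token can sit equally close to two unrelated senses:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hand-made 3-d "embeddings" (dimensions: religion-ness, astronomy-ness,
# name-likeness). Values are chosen to mirror the failure mode.
emb = {
    "faith":     (0.9, 0.0, 0.1),
    "telescope": (0.0, 0.9, 0.1),
    "vera":      (0.5, 0.5, 0.6),  # ambiguous: pulled toward both clusters
}

sim_faith = cosine(emb["vera"], emb["faith"])
sim_tel = cosine(emb["vera"], emb["telescope"])
print(sim_faith, sim_tel)  # "vera" is equally similar to both senses
```

With no other context in the query, a nearest-neighbor lookup over such vectors has no principled way to pick the intended sense.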

⚙️ The role of language barriers: Polish texts in an English-language query

An additional factor is the linguistic heterogeneity of results. Polish-language academic texts from JSTOR on philosophy of religion (Filozofia religii) appear in results because they contain the word "religii," morphologically similar to the English "religion."

| Noise factor | Mechanism | Result for user |
| --- | --- | --- |
| Homonymy | One word, multiple meanings | Mixing of astronomy, philosophy, linguistics |
| Cross-lingual models | Morphological similarity of words across languages | Polish texts in English-language results |
| Lack of explicit context | Algorithm guesses intent by frequency | Scientific articles instead of philosophical overviews |

Search engines using cross-lingual models consider these documents relevant, even if the user doesn't speak Polish. This creates additional noise: links to texts that cannot be read without translation and likely don't answer the original question.
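The morphological-similarity effect can be approximated with character-bigram overlap. This is a crude stand-in for the subword representations real cross-lingual models use; the method and thresholds are illustrative only:

```python
# Dice coefficient over character bigrams: a rough proxy for the subword
# overlap that lets models treat Polish "religii" as close to "religion".
def bigrams(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

def dice(a, b):
    x, y = bigrams(a), bigrams(b)
    return 2 * len(x & y) / (len(x) + len(y))

cross_lingual = dice("religion", "religii")   # high subword overlap
unrelated = dice("religion", "telescope")     # low subword overlap
print(cross_lingual, unrelated)
```

High surface overlap makes the Polish form look relevant to the English query, regardless of whether the user can read the document.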

[Figure: Visualization of search result collision with an ambiguous query — how one query generates three unrelated result clusters: VERA astronomical projects, philosophical texts on religion, and technical articles on consensus algorithms]

🔬Steelman Arguments: Why Search Engines Work This Way — And Whether There's Logic to It

Before criticizing algorithms, we need to understand their logic. Search engines don't "make mistakes" — they're optimized for metrics and assumptions about user behavior. Five arguments explain why the current system works the way it does.

🧪 Argument 1: Maximizing Search Recall

Search engines are historically optimized for recall (completeness), not precision (accuracy). The algorithm would rather show 100 results, of which 10 are relevant, than 10 results that are all relevant but miss other important documents.

Users can filter out noise, but they can't find what the system didn't show them.

For the query "comparison faith religion," the system shows astronomical articles (S002, S004, S006, S008) and philosophical texts (S001, S003, S005, S007) because it can't be certain of user intent. Excluding astronomical results would risk missing relevant content if the user is actually searching for the VERA project.
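The recall-over-precision trade-off is easy to make concrete with the article's own worked example: 11 returned results, 5 of them on-topic, out of an assumed 6 relevant documents in the index (the "6" is an illustrative assumption, not a measured figure):

```python
# Precision: what fraction of returned results is relevant.
# Recall: what fraction of all relevant documents was returned.
def precision(returned_relevant, returned_total):
    return returned_relevant / returned_total

def recall(returned_relevant, relevant_total):
    return returned_relevant / relevant_total

p = precision(5, 11)  # over half the list is noise
r = recall(5, 6)      # but almost everything relevant was surfaced
print(round(p, 2), round(r, 2))
```

An engine tuned to maximize recall accepts exactly this shape of output: high coverage bought with a noisy list.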

🧬 Argument 2: Statistical Uncertainty

The query "comparison faith religion" is objectively ambiguous. Without additional context, the system cannot determine intent. NLP algorithms work with probabilities: if in training data the word "faith" appears in both religious and astronomical contexts, the system assigns non-zero probability to both.

  1. Humans use common sense and context
  2. Algorithms rely only on patterns in data
  3. If the corpus contains documents where "VERA" and "religion" appear together, the system will consider them relevant

This isn't a bug, but a fundamental limitation of statistical models (S002, S004, S006).

🔁 Argument 3: Cross-Lingual Optimization

Modern search engines operate in dozens of languages and use cross-lingual models. An English query can return Polish, German, or Japanese results if the algorithm considers them semantically close.

| Advantage | Disadvantage |
| --- | --- |
| Access to global academic literature | Noise for users who don't read other languages |
| Researchers get the full spectrum of sources | Difficulty filtering irrelevant languages |

The system correctly identified that Polish texts (S001, S003, S005, S007) are about "religion," even if the language doesn't match. The alternative — limiting results to English only — would mean losing access to a significant portion of literature.

🧰 Argument 4: Long-Term Optimization Through Feedback

Search engines use reinforcement learning, where the success metric is user behavior: clicks, time on page, returns to results. If users sometimes click on astronomical articles, the algorithm interprets this as a relevance signal.

The more users click on irrelevant results, the more the algorithm becomes convinced these results are relevant.

This creates a feedback loop. Breaking it requires explicit feedback ("this isn't what I was looking for" buttons), but such mechanisms are rarely used at scale (S002, S004, S008).
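A minimal sketch of that feedback loop, assuming a simple exponential-moving-average update of a document's estimated relevance (the update rule and constants are invented; production systems are far more elaborate):

```python
# Each click nudges the estimated relevance toward 1.0, so accidental
# clicks on an off-topic result gradually entrench it in the ranking.
def update_score(score, clicked, lr=0.1):
    target = 1.0 if clicked else 0.0
    return score + lr * (target - score)

score = 0.3  # initial estimated relevance of an off-topic astronomy paper
for _ in range(20):  # twenty users click it out of curiosity
    score = update_score(score, clicked=True)
print(round(score, 3))  # the noise result now looks highly relevant
```

Without an explicit "this isn't what I was looking for" signal, the system cannot distinguish a curiosity click from a satisfied one.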

🛡️ Argument 5: Protection Against Manipulation

Narrow query interpretation opens opportunities for SEO manipulation. Optimizers could create pages that precisely match narrow queries and monopolize results. Broad results reduce this risk.

Trade-off: the system sacrifices relevance for spam protection. Even if someone optimizes a page for "comparison faith religion" in the philosophical sense, astronomical and other results will still appear in the output (S002, S004, S008).

Long-term effect: broad results may frustrate users in the short term, but they protect the search ecosystem from degradation.

All five arguments point to one thing: the current logic of search engines isn't a design error, but the result of trade-offs between completeness, robustness, and scalability. The question isn't whether algorithms work correctly, but which trade-offs we're willing to accept.

🔬Evidence Base: What the Sources Actually Show — and Why This Matters for Understanding the Problem

Let's analyze what the sources that appeared in search results for "faith religion comparison" actually contain. This will reveal how relevant they are to the original query and what mechanisms led to their appearance.

🧪 Cluster 1: Astronomical Research from the VERA Project

Sources (S002), (S004), (S006) cover the Japanese VERA project (VLBI Exploration of Radio Astrometry), which uses radio interferometry for high-precision astrometric measurements. (S002) describes observations of H₂O maser sources in molecular clouds, (S004) presents the first VERA astrometry catalog, (S006) focuses on studying the Galaxy's outer rotation curve.

These papers have nothing to do with philosophy of religion or the concept of faith. Their presence is explained by the coincidence of the acronym "VERA" with the search term. The algorithm didn't distinguish between contexts and included documents containing the keyword in titles and metadata.

Search engines operate at the level of lexical matching, not semantic understanding. For them, "VERA" = "faith" regardless of context.

🔬 Cluster 2: Vera Rubin Observatory

Source (S008) describes the Vera Rubin Observatory as a flagship experiment for studying dark matter. The observatory is named after American astronomer Vera Rubin, whose measurements of galactic rotation curves provided key evidence for dark matter.

Here "Vera" is a proper name, not a concept of religious faith. For the search engine, this is just another keyword match. The algorithm can't determine that the user isn't interested in astronomical objects named after people called Vera.

| Match type | Error mechanism | Result for user |
| --- | --- | --- |
| Homonymy (VERA = faith) | Lexical matching without contextual analysis | Astronomy papers in religious search results |
| Proper name (Vera Rubin) | Algorithm doesn't distinguish names from common nouns | Biographical data instead of philosophical texts |
| Word ambiguity | Lack of semantic disambiguation | Information noise instead of relevant results |

📚 Cluster 3: Polish Texts on Philosophy of Religion

Sources (S001), (S003), (S005), (S007) are chapters from a Polish-language book on philosophy of religion on JSTOR. They cover religion and truth, psychology of religion, methods of teaching philosophy of religion, and hermeneutical philosophy of religion.

These texts are genuinely relevant to the topic of "religion," but their Polish language makes them practically useless for an English-speaking user without a JSTOR subscription. It's impossible to assess whether they contain comparative analysis of faith concepts across different religions. The search engine showed these results because they contain the word "religii," but didn't evaluate their practical accessibility and language compatibility.

Language barrier: the Polish text requires language proficiency or machine translation, reducing the practical value of the result.

Content accessibility: JSTOR requires a subscription; full texts are unavailable for relevance verification.

Semantic relevance vs. practical utility: a source may be thematically close but useless without access and language skills.

⚙️ Cluster 4: Consensus Algorithms

The remaining source (S011) covers the EDCHO algorithm for distributed systems. This is technical work from computer science, with no direct connection to either religion or astronomy.

This source's presence can be explained by several factors. The word "consensus" is semantically close to concepts of "agreement" and "faith" in some contexts. NLP algorithms may accidentally link "comparison" with "consensus" if these words frequently appeared together in training data. This is an example of how statistical models create false associations based on surface patterns.

Statistical models learn from correlations, not causal relationships. If words frequently appear together in training data, the model will assume a connection, even when none exists.
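A spurious association of this kind can be reproduced with nothing more than co-occurrence counting. The corpus sentences below are invented; the point is that pure counting infers a "link" between "comparison" and "consensus" with no semantic basis:

```python
from collections import Counter
from itertools import combinations

# Made-up corpus: two computer-science sentences and one philosophy sentence.
corpus = [
    "consensus comparison of distributed algorithms",
    "performance comparison and consensus protocols",
    "faith and religion in philosophy",
]

# Count how often each unordered word pair shares a sentence.
pairs = Counter()
for sentence in corpus:
    for a, b in combinations(sorted(set(sentence.split())), 2):
        pairs[(a, b)] += 1

# ("comparison", "consensus") co-occurs twice: correlation, not meaning.
print(pairs[("comparison", "consensus")])
```

A model trained on such counts will happily surface consensus-algorithm papers for a query containing "comparison".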

🔍 Why This Matters for Understanding the Problem

Analysis of these clusters shows that search engines operate at the level of lexical matching and statistical associations, not semantic understanding. They can't distinguish that the user is seeking philosophical analysis of faith in religions, not astronomical projects with similar names.

This creates three types of problems: homonymy (one word, different meanings), polysemy (one word, multiple contexts), and false associations (statistical correlations without semantic connection). For users, this means they must independently filter results, relying on critical thinking and understanding of how search algorithms work.

For more on how scientific consensus works and why it's difficult to verify, see the article on faith and evidence. For methods of verifying extraordinary claims, read the miracle assessment protocol.

[Figure: Distribution of sources by thematic cluster and their relevance to the query — five clusters in the search results: VERA astronomy (3 sources), Rubin Observatory (1), philosophy of religion in Polish (5), consensus algorithms (1), education (1), and their actual relevance to the query about comparing faith across religions]

🧠The Mechanics of Cognitive Failure: Why Users Can't Quickly Filter Noise — and What Happens in Their Heads

The problem isn't just that search engines return irrelevant results, but that users expend cognitive resources processing them. Let's examine which psychological and cognitive mechanisms make information noise particularly toxic.

🧬 Cognitive Load: Why Every Extra Result Is a Tax on Attention

Cognitive load is the amount of mental effort required to process information. When a user sees a list of 11 results, where only 5 are potentially relevant and the other 6 are about astronomy, algorithms, and education, their brain is forced to perform additional work: read titles, assess relevance, make decisions about whether to click.

Each additional decision increases reaction time and reduces the accuracy of subsequent decisions (decision fatigue effect). In the context of information search, this means that a user confronted with many irrelevant results is more likely to miss a genuinely useful source or abandon the search entirely.

  1. Read the title and snippet (5–10 seconds)
  2. Assess relevance based on keywords (3–5 seconds)
  3. Decide: click or skip (2–3 seconds)
  4. If clicking — load the page and check context (10–30 seconds)
  5. If not relevant — return and repeat for the next result
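The checklist above implies a rough per-result time budget. A back-of-envelope calculation using the midpoints of the stated ranges (these are the article's own estimates, not measurements) shows how quickly noise inflates total search time:

```python
# Per-step durations in seconds (midpoints of the ranges listed above).
SCAN = 7.5    # read title and snippet (5-10 s)
ASSESS = 4.0  # judge relevance from keywords (3-5 s)
DECIDE = 2.5  # click or skip (2-3 s)
OPEN = 20.0   # load the page and check context (10-30 s)

def scan_cost(n_results, n_opened):
    # Every result is scanned, assessed, and decided on; only some are opened.
    return n_results * (SCAN + ASSESS + DECIDE) + n_opened * OPEN

clean = scan_cost(5, 2)   # 5 curated results, 2 opened
noisy = scan_cost(11, 5)  # 11 mixed results, 5 opened before giving up
print(clean, noisy)       # noise more than doubles the time spent
```

Even with generous assumptions, doubling the list length with noise more than doubles the attention tax.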

🔁 Anchoring Effect: How First Results Distort Perception of the Entire Search

Anchoring bias is a cognitive distortion where the first information received disproportionately influences subsequent judgments. If the first results are astronomical articles about the VERA project (S002), the user may begin doubting the correctness of their query: "Maybe I entered something wrong? Maybe 'faith' is actually some astronomical term?"

This creates additional cognitive load: instead of searching for needed information, the user spends time reevaluating their query and trying to understand why the system shows these particular results. In the worst case, they may decide their query is too complex or that the needed information doesn't exist at all, and stop searching.

🧠 Illusion of Understanding: Why Headlines Deceive

Scientific article titles often contain specialized terminology that can create an illusion of relevance. For example, the title "The First VERA Astrometry Catalog" (S004) contains the word "VERA," which the user might interpret as related to their query, even if the context is completely different. This is an example of how surface similarity (lexical match) masks deep difference (semantic mismatch).

People tend to overestimate their ability to understand complex texts based on titles and abstracts. A user might click on an article about the VERA project, spend several minutes reading the abstract, realize it's not what they were looking for, and return to the search results — having lost time and increased frustration.

The illusion of understanding is especially dangerous in scientific contexts: specialized vocabulary creates a sense of competence that masks the absence of real understanding. The user believes they understood because they recognized a few terms.

⚠️ Paradox of Choice: Why More Results Aren't Always Better

The classic paradox of choice states that increasing the number of options beyond a certain threshold reduces satisfaction and increases decision-making time. In the context of information search, this means that 11 results may be worse than 5 well-curated results.

When a user sees many results, they begin to doubt: "Maybe I'll miss the best result if I don't check them all?" This creates psychological pressure that forces them to spend more time browsing, even if the quality of results doesn't improve.

| Scenario | Cognitive load | Success probability | Search time |
| --- | --- | --- | --- |
| 5 relevant results | Low | High | 5–10 minutes |
| 11 results (5 relevant + 6 noise) | High | Medium | 15–30 minutes |
| 11 results (2 relevant + 9 noise) | Very high | Low | 30+ minutes or abandonment |

🔍 Real-Time Filtering: How the Brain Tries to Cope with Noise

When a user encounters information noise, their brain tries to apply quick mental shortcuts for filtering results. For example, they might ignore results that look "too technical" or "too philosophical," based on surface features.

The problem is that these heuristics often fail. A user might reject a relevant result because its title looks too complex, or conversely, click on an irrelevant result because its title looks simple and understandable. This creates an additional cycle of disappointment and time loss.

Keyword relevance heuristic: the user looks for an exact match of the word "faith" in the title. If the word isn't there, the result is often ignored, even if the context is relevant. Trap: astronomical articles contain the word "VERA," creating a false match.

Source relevance heuristic: the user assumes that results from known sources (e.g., scientific journals) are more relevant. However, this doesn't guarantee relevance for a specific query. Trap: an article from an authoritative source may be completely unrelated to what the user is searching for.

Text length relevance heuristic: the user might assume that longer articles contain more complete information. In reality, length doesn't correlate with relevance. Trap: a long article about VERA might deter a user seeking a brief explanation of the philosophy of faith.

💡 Solution: Minimizing Cognitive Load Through Design

Understanding these mechanisms allows us to improve the design of search engines and information interfaces. Instead of returning 11 results and hoping the user finds the right one, the system should actively filter results and provide only relevant ones.

This requires better understanding of query context, semantic analysis (not just lexical matching), and possibly interactive query refinement. Users should be able to quickly tell the system: "This isn't what I'm looking for" — and receive improved results without expending cognitive resources on filtering noise.
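One minimal form of such interactive refinement is turning a rejected result into query exclusions. The sketch below is a toy version under invented data; real systems (e.g. Rocchio-style relevance feedback) adjust term weights rather than appending negative operators:

```python
# When the user marks a result "not what I'm looking for", terms that are
# distinctive to the rejected document become exclusions in the next query.
def refine_query(query, rejected_doc, keep=2):
    q_terms = set(query.split())
    # Terms of the rejected document that are absent from the query.
    foreign = [t for t in rejected_doc.split() if t not in q_terms]
    exclusions = sorted(set(foreign))[:keep]
    return query + " " + " ".join("-" + t for t in exclusions)

query = "comparison faith religion"
rejected = "vera astrometry maser observations"  # an off-topic astronomy hit
print(refine_query(query, rejected))
```

The refined query carries explicit negative signals, sparing the user from re-filtering the same noise on the next search.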

For the user themselves, the key is awareness of these cognitive traps. If you understand how anchoring bias and the illusion of understanding work, you can consciously slow down your search process, reformulate your query, and check result relevance more critically. This requires additional effort but saves time in the long run. For more on how to verify information, see the article on faith and evidence and logical fallacies in religious arguments.

⚔️ Counter-Position Analysis

⚖️ Critical Counterpoint

The article diagnoses the problem of search algorithms but leaves blind spots in its own logic. Here's where the analysis requires clarification.

The Problem is in the Query, Not the Algorithm

The query "comparison faith religion" is itself ambiguous and incorrect. Perhaps the search engine is working correctly, reflecting the real uncertainty in the user's formulation, rather than making an error.

Contradiction in Source Rating

We assign all sources a rating of 3/5 without detailed content analysis — this contradicts our own call for critical verification. ArXiv preprints may contain breakthrough data, and Polish JSTOR texts may contain deep philosophical research, but we devalue them due to language barriers and lack of peer review.

Ignoring Alternative Query Interpretations

The verdict of "100% noise" assumes the user is searching for a philosophical comparison. But they could have been searching for information about the VERA telescope or the Vera Rubin Observatory — in which case the search results would be relevant.

Diagnosis Without Solution

The article identifies the problem of information noise but does not provide alternative sources that actually answer the query about comparing faith and religion. The analysis remains incomplete.

Assumption About User Digital Literacy

The filtering protocol assumes a high level of digital literacy, which is not always true. For many people, even basic search operators remain inaccessible, making the recommendations impractical.


Frequently Asked Questions

What is the VERA telescope?
VERA is a Japanese radio telescope for high-precision astrometry. Full name: VLBI Exploration of Radio Astrometry. The system consists of several radio telescopes operating as a single interferometer. VERA observes H₂O (water vapor) maser sources in molecular clouds and measures trigonometric parallax to map the structure of the Galaxy (S002, S004, S006). The first VERA catalog contains data on dozens of sources with accuracy down to microarcseconds.

Why do search engines confuse a telescope with religious faith?
Due to homonymy and lack of semantic context. The word "VERA" is both an acronym for the Japanese telescope and a homonym of the Russian word for "faith" (vera), frequent in philosophical and religious texts. Search algorithms rank results by keyword matching, not by query meaning. When a user enters "comparison faith religion," the system sees "faith" (VERA) and returns everything: astronomical preprints (S002, S004, S006), Polish articles on philosophy of religion (S001, S003, S005, S007, S010), and even consensus algorithms (S011). This is a classic example of information noise.

How is the VERA telescope related to the Vera Rubin Observatory?
Not related at all, despite similar names. The Vera C. Rubin Observatory is a major astronomical project for studying dark matter, named after American astronomer Vera Rubin (S008). VERA is a Japanese radio interferometer for astrometry. These are independent projects with different methods and objectives. The confusion arises from the coincidence of the name "Vera" in both titles, which amplifies information noise in search results.

Why do all sources have an identical 3/5 rating?
Because the evaluation was conducted formally, without considering content. All sources are either arXiv preprints (S002, S004, S006, S008, S009, S011) or JSTOR academic texts (S001, S003, S005, S007, S010). Formally these are reliable platforms, but the 3/5 rating reflects the absence of expert content verification. arXiv preprints don't undergo peer review before publication, and Polish JSTOR texts are unavailable for verification without language knowledge. Identical ratings are a red flag: they hide real differences in source quality and relevance.

How do you assess the quality of a source?
Check three parameters: relevance, verifiability, and currency. Relevance: the source must answer your specific question, not just contain keywords. Verifiability: are there references to primary data, methodology, authors with affiliation? Currency: for rapidly changing fields (AI, medicine), sources older than 2–3 years may be outdated. In our query case, none of the 11 sources passes the relevance test: they don't compare faith and religion but discuss completely different things.

What is information noise and why is it dangerous?
Information noise is an excess of irrelevant data masking useful information. The danger lies in cognitive overload: the brain spends resources filtering garbage instead of analyzing facts. In our case, a user searching for a comparison of faith and religion receives astronomical preprints and Polish philosophical texts. Result: either abandoning the search (frustration) or a false sense that "there's lots of information, so the topic must be complex." In reality, the problem isn't topic complexity but search algorithm failure.

Why do Polish-language texts appear in English-language results?
Due to linguistic coincidence of the word "religia" (religion). Search engines index texts by keywords regardless of language. Polish JSTOR articles (S001, S003, S005, S007, S010) contain the terms "filozofia religii" (philosophy of religion), "prawda" (truth), and "psychologia religii" (psychology of religion). The algorithm sees a match with the query "faith religion" and includes them in results, ignoring the language barrier and absence of translation. This makes the sources useless for English-speaking users.

How do you quickly tell whether search results are noise?
Use the "first 5 results" test. Open the first 5 links and ask: do they answer my original query? If 3 out of 5 sources discuss different unrelated topics — it's noise. In our case: S001 (philosophy of truth), S002 (maser observations), S003 (psychology of religion), S004 (astrometry catalog), S005 (teaching methods for philosophy of religion). None compare faith and religion. Verdict: 100% noise.

What is semantic search and why does it matter?
Semantic search is technology that understands query meaning, not just keywords. Instead of simple term matching, the algorithm analyzes context, synonyms, and connections between concepts. For example, the query "comparison faith religion" should return texts about differences between personal faith and institutional religion, not astronomical preprints. Modern search engines (Google, Bing) use language models (BERT, GPT), but they still fail with ambiguous terms. Importance: without semantic search, users drown in information noise.

How do you filter information noise out of search results?
Use a five-step filtering protocol. Step 1: Refine your query — add contextual words (e.g., "philosophy of faith vs religion" instead of "faith religion"). Step 2: Check the first 3 results — if they're off-topic, change your wording. Step 3: Use search operators (quotes for exact phrases, minus to exclude words). Step 4: Filter by date and language in search settings. Step 5: Check sources for relevance before reading — look at the title, abstract, and first paragraph. If the connection to your query isn't clear within 30 seconds — skip it.

Why do arXiv preprints appear in results for this query?
Due to abbreviation matching with the search term. arXiv is a scientific preprint repository where articles are published before peer review. Sources S002, S004, S006 describe observations from the Japanese VERA telescope. The search algorithm sees "VERA" in article titles and connects it with the search query. This is a technical glitch: the system doesn't distinguish between the abbreviation (VERA = VLBI Exploration of Radio Astrometry) and the search term. Result: astronomy preprints in results for a philosophical-religious query.

Can you trust a source rated 3/5?
Depends on context and verifiability. A 3/5 rating means "moderate reliability" — the source is from a recognized repository but lacks full expert review. For arXiv preprints this is normal: they're published quickly but require critical reading. For JSTOR academic texts, a 3/5 rating may indicate lack of full-text access or language barriers. Main rule: don't trust ratings blindly. Verify: is there methodology, data, references to primary sources? If not — the rating doesn't matter.
Deymond Laplasa
Cognitive Security Researcher

Author of the Cognitive Immunology Hub project. Researches mechanisms of disinformation, pseudoscience, and cognitive biases. All materials are based on peer-reviewed sources.
