Methodology

Full transparency on how this tool works, its limitations, and where human judgment enters the pipeline.

What this tool does

Useful Idiot is a consistency database. It accumulates commentators' publicly stated positions across multiple conflicts and topics, derives logical implication chains from stated premises, cross-references positions for internal coherence, and surfaces structural patterns between commentator clusters and actor interests.

What this tool does NOT do

  • Fact-check claims (we don't verify if claims are true)
  • Claim hidden agendas (we only follow stated logic)
  • Take partisan positions (analysis is applied symmetrically)
  • Operate in real-time (all outputs are editorially reviewed)

The pipeline

Phase 1: Position Extraction

A commentator's public statement is ingested. The core position is extracted and mapped to shared premises - claims that multiple commentators can hold independently. Each commentator's specific wording and reasoning for holding a premise is preserved. Human review verifies accuracy.
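The extraction output described above can be pictured as a small record type. The field names below are illustrative assumptions, not the tool's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Premise:
    """A shared claim that multiple commentators can hold independently."""
    premise_id: str
    statement: str

@dataclass
class Position:
    """One commentator's extracted position, mapped onto shared premises."""
    commentator: str
    source_url: str  # link to the primary source statement (hypothetical field)
    premises: list[Premise] = field(default_factory=list)
    # Each commentator's own wording for why they hold a premise is preserved,
    # keyed by premise_id.
    reasoning: dict[str, str] = field(default_factory=dict)
    human_reviewed: bool = False  # flipped only after editorial verification
```

The point of the shared-premise layer is that two commentators asserting the same `Premise` can be compared directly, while their individual `reasoning` entries remain distinct.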

Phase 2: Implication Chain Derivation

For each premise, the logical chain of implications is derived step by step. Each step is substantiated with a factual basis (historical precedent, economic mechanism, political reality) and carries a confidence rating reflecting how certain the derivation is. Human review checks for logical validity and strawmanning.
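A chain of steps, each with its own confidence, can be sketched as follows. The multiplicative aggregation rule is an assumption chosen here because it is conservative (a chain is never more certain than its weakest links combined); it is not the tool's documented method:

```python
from dataclasses import dataclass

@dataclass
class ImplicationStep:
    """One step in a derived implication chain (illustrative structure)."""
    claim: str
    basis: str         # e.g. "historical precedent", "economic mechanism"
    confidence: float  # 0.0-1.0 certainty rating for this single step

def chain_confidence(steps: list[ImplicationStep]) -> float:
    """Overall confidence in the chain: the product of per-step confidences.
    This aggregation convention is an assumption for illustration."""
    result = 1.0
    for step in steps:
        result *= step.confidence
    return result
```

Under this convention, two steps rated 0.9 and 0.8 yield a chain confidence of 0.72, which makes explicit how quickly certainty decays as chains lengthen.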

Phase 3: Beneficiary Mapping

For each actor in a conflict, the tool assesses whether a position supports or opposes that actor's interests. The mechanism is described, rated by strength (direct, indirect, structural), and tagged by direction (supports or opposes). Human review verifies substantiation.
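The strength and direction tags described above form small closed vocabularies, which suggests a structure like the following. Again, names are illustrative, not the production schema:

```python
from dataclasses import dataclass
from enum import Enum

class Strength(Enum):
    """How directly a position affects an actor's interests."""
    DIRECT = "direct"
    INDIRECT = "indirect"
    STRUCTURAL = "structural"

class Direction(Enum):
    SUPPORTS = "supports"
    OPPOSES = "opposes"

@dataclass
class BeneficiaryLink:
    """One assessed link between a position and an actor's interests."""
    actor: str
    mechanism: str       # plain-language description of how the effect operates
    strength: Strength
    direction: Direction
```

Using enums rather than free-text tags keeps the symmetry principle enforceable: every link must carry exactly one strength and one direction, for every side.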

Phase 4: Premise Scrutiny

Each premise undergoes structured logical analysis: claim type classification, hidden dependency identification, evidence gathering (both supporting and challenging), and logical vulnerability detection. A scrutiny score is computed across three dimensions. Human review ensures neutrality.

Phase 5: Pattern Detection

Clusters of commentators whose positions consistently benefit the same actors are identified across conflicts. These patterns are presented as correlation, never causation. Cross-spectrum convergences - where commentators from opposing political backgrounds take positions that benefit the same actor - are highlighted as analytically significant.
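The core of this detection step can be sketched as a co-occurrence count over beneficiary links. The logic below is illustrative, not the production algorithm: it groups commentators by the actors their positions support in more than one conflict.

```python
from collections import defaultdict

def benefit_clusters(links):
    """Group commentators by the actors their positions consistently benefit.

    `links` is an iterable of (commentator, actor, conflict) tuples for
    positions tagged as supporting that actor. Returns a dict mapping each
    actor to the set of commentators whose positions benefit that actor in
    more than one conflict. This is correlation only, never causation.
    """
    conflicts_seen = defaultdict(set)  # (commentator, actor) -> conflicts
    for commentator, actor, conflict in links:
        conflicts_seen[(commentator, actor)].add(conflict)

    clusters = defaultdict(set)
    for (commentator, actor), seen in conflicts_seen.items():
        if len(seen) > 1:  # "consistently" = across multiple conflicts
            clusters[actor].add(commentator)
    return dict(clusters)
```

A cross-spectrum convergence would then be an actor whose cluster contains commentators tagged with opposing political backgrounds; that second filter is omitted here for brevity.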

Scrutiny scoring

Every premise receives a scrutiny score (0-100) computed from three equally weighted dimensions. The score reflects how well a claim holds up to logical analysis - not whether it is “true” or “false.”

Evidential Basis (0-100)

How well is the claim supported by verifiable evidence relative to what challenges it? Strong, specific evidence with weak counter-evidence scores high. Speculative claims with strong counter-evidence score low.

Logical Coherence (0-100)

How well-formed is the reasoning? Are the hidden dependencies defensible? Are there serious logical vulnerabilities like circularity, equivocation, or unfalsifiable construction? Clean reasoning with minimal vulnerabilities scores high.

Falsifiability (0-100)

Can the claim be tested or disproven? Empirical claims with clear criteria score high. Value judgments and definitional framings that cannot be empirically challenged score low. Normative claims inherently score lower on this dimension - this is structural, not a judgment on their validity.
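Since the three dimensions are equally weighted, the composite score reduces to a simple mean. A minimal sketch, with the per-dimension ratings themselves remaining human analytical judgments:

```python
def scrutiny_score(evidential: float, coherence: float, falsifiability: float) -> float:
    """Equal-weight mean of the three 0-100 dimensions described above."""
    for dimension in (evidential, coherence, falsifiability):
        if not 0 <= dimension <= 100:
            raise ValueError("each dimension must be in the range 0-100")
    return (evidential + coherence + falsifiability) / 3
```

For example, a well-evidenced but unfalsifiable value claim rated (90, 80, 10) scores 60, which is how normative claims land in the middle of the scale without any judgment on their validity.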

Important caveats

  • A low score does not mean the premise is wrong - it means the reasoning is weak, the evidence is thin, or the claim is structured to resist disproof.
  • A high score does not mean the premise is right - it means the reasoning is sound, the evidence is specific, and the claim can be tested.
  • The same standard is applied to every premise regardless of who holds it or which political position it supports.
  • Commentator scores are derived from the average scrutiny of their premises. A commentator who builds on well-evidenced, logically coherent claims scores higher than one who builds on unfalsifiable assertions.
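The commentator-level aggregation in the last bullet is a plain average of premise scores, sketched here:

```python
def commentator_score(premise_scores) -> float:
    """A commentator's score: the mean scrutiny score of their premises."""
    scores = list(premise_scores)
    if not scores:
        raise ValueError("commentator has no scored premises")
    return sum(scores) / len(scores)
```

One consequence of averaging: a commentator cannot raise their score by adding many premises, only by building on better-scrutinized ones.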

Key principles

  • Symmetry: Every analytical operation is applied equally to all sides.
  • Logic only: We never claim hidden agendas. We only follow stated positions to logical conclusions.
  • Source everything: Every position links to a primary source. Every implication states its logical basis.
  • Confidence transparency: Every derived claim has a confidence rating. Every premise has a scrutiny score.
  • Human editorial layer: LLM outputs are drafts. Nothing publishes without human review.
  • Methodology transparency: The full analytical pipeline is public and open to critique.

Use of AI

This tool uses large language models (Claude by Anthropic) for extracting positions, deriving implication chains, mapping beneficiaries, analyzing premises, and computing scrutiny scores. Every LLM output passes through human editorial review before publication. The LLM is a drafting tool, not an oracle.

Known limitations

  • Scrutiny scores are analytical assessments, not objective measurements - reasonable analysts could score differently.
  • The tool currently covers one conflict with illustrative data. Analysis will deepen as more conflicts and real source material are added.
  • LLM-derived analysis inherits the model's training biases. Human editorial review mitigates but cannot eliminate this.
  • Pattern detection shows correlation, not causation. Commentators whose positions benefit an actor may have no connection to that actor.