Methodology
Full transparency on how this tool works, its limitations, and where human judgment enters the pipeline.
Who is the useful idiot?
You are. So are we. Geopolitics is played by institutional actors — governments, lobbies, military establishments, corporations — who pursue defined interests with or without public support. Your interests may genuinely overlap with theirs, but the question is whether that's because they represent you or because your support is useful to them. It can be both. It is rarely obvious which.
Commentators help us make sense of these conflicts, mostly earnestly. But they are not participants either — they give voice to what portions of the audience already feel. The commentator shapes the narrative, the narrative shapes public opinion, and public opinion provides the democratic legitimacy institutions require. Democracy does not structurally solve this: you get a vote on who sits in the chair, not on what the chair is bolted to.
The useful idiot is not the commentator — they state their positions openly. It is the follower who adopts those positions without examining which institutional interests they advance. This tool maps positions to their logical conclusions and shows which institutions benefit — not to blame the messenger, but to show you the full picture behind the message.
What this tool does NOT do
- Judge commentators as good or bad - they state their positions openly
- Accuse anyone of being a foreign agent - beneficiary mapping shows outcomes, not intentions
- Claim hidden agendas - we only follow stated logic to its conclusions
- Determine which side is correct - the tool surfaces patterns, the reader judges
- Impose a definition of any country's “real” interests - actor interests are defined in the dataset and shown transparently
The pipeline
Phase 1: Position Extraction
A commentator's public statement is ingested. The core position is extracted and mapped to shared premises - claims that multiple commentators can hold independently. Each commentator's specific wording and reasoning for holding a premise are preserved. Human review verifies accuracy.
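To make the output of this phase concrete, here is a minimal sketch of what an extracted record could look like. The TypeScript shape and field names are illustrative assumptions, not the tool's actual schema.

```ts
// Illustrative sketch only - field names and shape are assumptions,
// not the tool's actual schema.
interface Premise {
  id: string;
  statement: string; // the shared claim, phrased neutrally
}

interface CommentatorPosition {
  commentator: string;
  premiseId: string;      // links the position to a shared Premise
  wording: string;        // the commentator's own phrasing, preserved
  reasoning: string;      // why this commentator holds the premise
  sourceUrl?: string;     // primary source, when one exists (see "Sources and characterizations")
  humanReviewed: boolean; // nothing publishes without human review
}
```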
Phase 2: Implication Chain Derivation
For each premise, the logical chain of implications is derived step by step. Each step is substantiated with a factual basis (historical precedent, economic mechanism, political reality) and carries a confidence rating reflecting how firmly it follows from the step before. Human review checks for logical validity and strawmanning.
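A hypothetical shape for a derived chain, again with assumed names rather than the real schema:

```ts
// Hypothetical representation of one step in an implication chain.
type Confidence = "high" | "medium" | "low";

interface ImplicationStep {
  from: string;           // the claim this step starts from
  implies: string;        // what it is argued to entail
  basis: string;          // factual substantiation: precedent, mechanism, political reality
  confidence: Confidence; // how firmly the step follows from the one before
}

// A chain is an ordered list of steps, each checked in human review
// for logical validity and strawmanning.
type ImplicationChain = ImplicationStep[];
```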
Phase 3: Beneficiary Mapping
For each actor in a conflict, the tool assesses whether a position supports or opposes that actor's interests. The mechanism is described, rated by strength (direct, indirect, structural), and tagged by direction (supports or opposes). Human review verifies substantiation.
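A sketch of a single mapping record, with the strength and direction tags described above; the names are assumptions for illustration:

```ts
// Illustrative beneficiary-mapping record (assumed shape, not the actual schema).
type Strength = "direct" | "indirect" | "structural";
type Direction = "supports" | "opposes";

interface BeneficiaryMapping {
  actor: string;        // e.g. a government, lobby, military establishment, corporation
  premiseId: string;    // the position being assessed
  direction: Direction; // whether the position supports or opposes the actor's interests
  strength: Strength;   // how directly the interest is engaged
  mechanism: string;    // the substantiated pathway from position to benefit
}
```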
Phase 4: Premise Scrutiny
Each premise undergoes structured logical analysis: claim type classification, hidden dependency identification, evidence gathering (both supporting and challenging), and logical vulnerability detection. A scrutiny score is computed across three dimensions. Human review ensures neutrality.
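The elements of that analysis could be collected in a record like the following sketch (assumed names and claim-type labels, for illustration only):

```ts
// Hypothetical structure of a premise-scrutiny result.
interface ScrutinyAnalysis {
  premiseId: string;
  claimType: "empirical" | "predictive" | "normative" | "definitional";
  hiddenDependencies: string[]; // unstated assumptions the claim rests on
  supportingEvidence: string[];
  challengingEvidence: string[];
  vulnerabilities: string[];    // e.g. circularity, equivocation, unfalsifiable construction
  scores: {                     // the three dimensions, each 0-100 (see "Scrutiny scoring")
    evidentialBasis: number;
    logicalCoherence: number;
    falsifiability: number;
  };
}
```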
Phase 5: Pattern Detection
Clusters of commentators whose positions consistently benefit the same actors are identified across conflicts. These patterns are presented as correlation, never causation. Cross-spectrum convergences - where commentators from opposing political backgrounds hold positions that benefit the same actor - are highlighted as analytically significant.
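In code, the core of this detection is simple grouping. The sketch below assumes per-commentator support records derived from the Phase 3 mappings and collects, for each actor, the commentators whose positions support it. Everything it produces is correlation; it says nothing about intent.

```ts
// Minimal alignment-clustering sketch (illustrative, not the actual implementation).
interface SupportRecord {
  commentator: string;
  actor: string;
  direction: "supports" | "opposes";
}

// Groups commentators by the actors their positions benefit.
function alignmentClusters(records: SupportRecord[]): Map<string, Set<string>> {
  const clusters = new Map<string, Set<string>>();
  for (const r of records) {
    if (r.direction !== "supports") continue;
    let cluster = clusters.get(r.actor);
    if (!cluster) {
      cluster = new Set();
      clusters.set(r.actor, cluster);
    }
    cluster.add(r.commentator);
  }
  return clusters; // actor -> commentators whose positions benefit that actor
}
```

A cross-spectrum convergence would then be a cluster containing commentators tagged with opposing political backgrounds; detecting it only requires joining these clusters against a background label per commentator.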
Scrutiny scoring
Every premise receives a scrutiny score (0-100) computed from three equally weighted dimensions. The score reflects how well a claim holds up to logical analysis - not whether it is “true” or “false.”
Evidential Basis (0-100)
How well is the claim supported by verifiable evidence relative to what challenges it? Strong, specific evidence with weak counter-evidence scores high. Speculative claims with strong counter-evidence score low.
Logical Coherence (0-100)
How well-formed is the reasoning? Are the hidden dependencies defensible? Are there serious logical vulnerabilities like circularity, equivocation, or unfalsifiable construction? Clean reasoning with minimal vulnerabilities scores high.
Falsifiability (0-100)
Can the claim be tested or disproven? Empirical claims with clear criteria score high. Value judgments and definitional framings that cannot be empirically challenged score low. Normative claims inherently score lower on this dimension - this is structural, not a judgment on their validity.
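Because the three dimensions are equally weighted, the premise score reduces to an unweighted mean. A minimal sketch of that arithmetic, with assumed names:

```ts
// Scrutiny score as the unweighted mean of the three dimensions (0-100 each).
interface DimensionScores {
  evidentialBasis: number;
  logicalCoherence: number;
  falsifiability: number;
}

function scrutinyScore(d: DimensionScores): number {
  return (d.evidentialBasis + d.logicalCoherence + d.falsifiability) / 3;
}

// Example: a well-evidenced, coherent, but hard-to-falsify claim.
// scrutinyScore({ evidentialBasis: 80, logicalCoherence: 70, falsifiability: 45 }) === 65
```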
Important caveats
- A low score does not mean the premise is wrong - it means the reasoning is weak, the evidence is thin, or the claim is structured to resist disproof.
- A high score does not mean the premise is right - it means the reasoning is sound, the evidence is specific, and the claim can be tested.
- The same standard is applied to every premise regardless of who holds it or which political position it supports.
- Commentator scores are derived from the average scrutiny of their premises. A commentator who builds on well-evidenced, logically coherent claims scores higher than one who builds on unfalsifiable assertions.
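As a sketch of the aggregation in the last caveat, a commentator's score is the average of their premises' scrutiny scores (we assume an unweighted mean here for illustration):

```ts
// Commentator score: average scrutiny across the commentator's premises.
function commentatorScore(premiseScores: number[]): number {
  if (premiseScores.length === 0) return 0; // no premises yet: no score (assumed convention)
  return premiseScores.reduce((sum, s) => sum + s, 0) / premiseScores.length;
}
```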
Key principles
- Symmetry: Every analytical operation is applied equally to all sides.
- Logic only: We never claim hidden agendas. We only follow stated positions to logical conclusions.
- Source everything: Every position should link to a primary source. Every implication states its logical basis. Positions without sources are labeled as characterizations.
- Confidence transparency: Every derived claim has a confidence rating. Every premise has a scrutiny score.
- Human editorial layer: LLM outputs are drafts. Nothing publishes without human review.
- Methodology transparency: The full analytical pipeline is public and open to critique.
Sources and characterizations
The current dataset consists of synthesized characterizations of commentators' publicly known stances. These are based on their body of public statements, not direct quotes from specific sources. The position statements, premise analyses, and logical structures are editorially constructed to reflect each commentator's known views as accurately as possible.
Going forward, new positions will be ingested from primary sources (articles, interviews, speeches, testimony) and linked directly. Existing characterizations will be replaced with sourced material as it becomes available.
If you believe a characterization is inaccurate, we welcome corrections. The tool is only as good as the accuracy of the positions it analyzes.
Use of AI
This tool uses large language models for synthesizing commentator positions, deriving implication chains, mapping beneficiaries, analyzing premises, and computing scrutiny scores. Every LLM output passes through human editorial review before publication. The LLM is a drafting tool, not an oracle.
Known limitations
- Scrutiny scores are analytical assessments, not objective measurements - reasonable analysts could score differently.
- The tool currently covers one conflict with illustrative data. Analysis will deepen as more conflicts and real source material are added.
- LLM-derived analysis inherits the model's training biases. Human editorial review mitigates but cannot eliminate this.
- Pattern detection shows correlation, not causation. Commentators whose positions benefit an actor may have no connection to that actor.