How it works

How TheoremPath organizes learning, claims, and evidence.

Last reviewed: April 30, 2026

TheoremPath does not treat a topic page as one large article. It breaks the page into reviewable objects: prerequisite edges, claim records, source roles, diagnostic items, evidence notes, and practice work. That makes the site easier to audit and easier to improve.

See the public evidence page for representative Lean wrappers and display rules, or the Lean verification dashboard for the full manifest.

Operating rule

The site is useful only if the same underlying objects serve the article, the diagnostic, the source trail, and the proof record. Prose explains the object; it does not replace it.

Dependencies: what must be known first
Claims: exact statement, assumptions, failure modes
Sources: what supports the claim, and how strong it is
Diagnostics: which skill each question tests
Evidence: source checks, Lean scope, review state

Page structure

A topic page starts with a concrete problem and a small set of prerequisites. It then moves through definitions, examples, formal statements, common mistakes, exercises, and links to related topics.

Diagrams, demos, and project work are supporting material. They are included when they clarify a concept, not because every topic needs an interactive surface.

System loop

From a learning goal to the next useful prerequisite.

This is the core mechanism behind diagnostics and future public evidence displays.

  1. Target: start with a theorem or paper goal. Name the thing the learner wants to understand, then expose the prerequisites it depends on.

  2. Diagnose: map answers to skills. Q-matrix links connect each diagnostic item to the definitions, claims, and techniques it tests.

  3. Ground: attach claim evidence. Claim records keep statements, assumptions, failure modes, and source roles together.

  4. Verify: record exact proof scope. Lean entries name the theorem declaration and the scope they actually verify.

  5. Route: recommend the next prerequisite. The study path uses graph edges and evidence state, not page-level guesses.
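The five steps above can be sketched as one routing function. Everything here is a simplified assumption: the goal, graph, Q-matrix, and evidence tables are toy inputs, and real routing would weigh far more state.

```python
def run_loop(goal, graph, answers, q_matrix, evidence):
    # 1. Target: expand the goal into its prerequisites.
    prereqs = graph.get(goal, [])
    # 2. Diagnose: a miss on a question implicates every skill it tests.
    weak = {s for q, ok in answers.items() if not ok
            for s in q_matrix.get(q, [])}
    # 3./4. Ground and verify: keep only prerequisites whose claims
    #       have recorded evidence (source check or exact-scope Lean entry).
    grounded = [p for p in prereqs if evidence.get(p)]
    # 5. Route: recommend the first grounded prerequisite the learner missed.
    for p in grounded:
        if p in weak:
            return p
    return None

graph = {"stokes": ["manifolds", "differential_forms"]}
q_matrix = {"q1": ["manifolds"], "q2": ["differential_forms"]}
answers = {"q1": True, "q2": False}
evidence = {"manifolds": ["src-1"], "differential_forms": ["src-2", "lean-1"]}
print(run_loop("stokes", graph, answers, q_matrix, evidence))
```

The point of the sketch is the data flow: the same graph edges and evidence records drive the recommendation, so the route can always be explained by pointing at them.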

Evidence records

These records make the system inspectable. They say what is supported, what is formally checked, and what should be rechecked.

Claim record

Statement, assumptions, failure modes, changed fields, dependent claims.

Makes corrections claim-level instead of trusting or rewriting an entire page.

Source support

Source role, locator, authority tier, support status.

Separates canonical support from context, examples, and informal references.

Lean mapping

Declaration name, import path, proof scope, check status.

Counts only exact-scope wrappers as Lean evidence for a governed claim.
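An exact-scope wrapper might look like the following. This is a hypothetical example: the declaration name and claim are invented for illustration, and only `Nat.add_comm` is a real library lemma.

```lean
-- Hypothetical exact-scope wrapper: the declaration name and the
-- statement match the governed claim precisely, and the proof
-- contains no `sorry` or `admit`, so CI can record it as Lean
-- evidence for that claim and nothing broader.
theorem claim_nat_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```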

Diagnostic item

Question ID, mapped skills, answer evidence, difficulty signal.

Lets the system explain which prerequisite is blocking a learner.

Review queue

Reverse dependencies from changed sources, claims, or proofs.

Shows what needs revalidation before public trust indicators change.
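Computing the review queue is a reverse-dependency traversal. The sketch below assumes a toy dependency table; the object names and the idea that pages and wrappers "depend on" claims are illustrative, not the production model.

```python
from collections import deque

def review_queue(changed, depends_on):
    # depends_on maps each object to the objects it relies on;
    # invert it so a change propagates to everything downstream.
    rdeps = {}
    for obj, deps in depends_on.items():
        for d in deps:
            rdeps.setdefault(d, []).append(obj)
    # Breadth-first walk from the changed objects collects every
    # dependent that needs revalidation, nearest first.
    queue, seen = deque(changed), set(changed)
    order = []
    while queue:
        obj = queue.popleft()
        for dependent in rdeps.get(obj, []):
            if dependent not in seen:
                seen.add(dependent)
                order.append(dependent)
                queue.append(dependent)
    return order  # everything needing re-review before badges update

deps = {
    "clm-ftc": ["src-rudin"],       # claim cites a source
    "page-calculus": ["clm-ftc"],   # page displays the claim
    "lean-ftc": ["clm-ftc"],        # wrapper verifies the claim
}
print(review_queue(["src-rudin"], deps))
```

A change to one source thus surfaces the claim that cites it, then the page and proof wrapper that depend on that claim.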

Current verification boundary

A missed answer

Maps to the prerequisite skill it tests, then points to the shortest useful review path.

A changed claim

Keeps its own assumptions, sources, version history, and re-review state.

A Lean proof

Counts only for the exact theorem scope recorded in the Lean mapping.

Prerequisite graph

The graph stores typed nodes: topics, definitions, statements, assumptions, equations, skills, exercises, source locations, and content gaps.

Edges are labeled by relationship: prerequisite, application, proof dependency, source grounding, misconception repair, or diagnostic result.
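A typed graph like this can be enforced with a small amount of validation. The node and edge type names below are taken from the lists above; the `Edge` class and example IDs are a hypothetical sketch.

```python
from dataclasses import dataclass

# Type vocabularies mirroring the lists in the text.
NODE_TYPES = {"topic", "definition", "statement", "assumption",
              "equation", "skill", "exercise", "source_location", "gap"}
EDGE_TYPES = {"prerequisite", "application", "proof_dependency",
              "source_grounding", "misconception_repair",
              "diagnostic_result"}

@dataclass(frozen=True)
class Edge:
    src: str
    dst: str
    kind: str

    def __post_init__(self):
        # Reject edges whose label is outside the fixed vocabulary.
        if self.kind not in EDGE_TYPES:
            raise ValueError(f"unknown edge type: {self.kind}")

edges = [
    Edge("topic:integration", "definition:riemann_sum", "prerequisite"),
    Edge("statement:ftc", "source:rudin-6.21", "source_grounding"),
]

def prerequisites(node, edges):
    # Only prerequisite edges drive routing; the other labels serve
    # grounding, repair, and diagnostics.
    return [e.dst for e in edges
            if e.src == node and e.kind == "prerequisite"]

print(prerequisites("topic:integration", edges))
```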

Diagnostics and learner state

Diagnostics map answers to prerequisite areas. A missed question can point to the topic, definition, or skill that explains the miss.

Where learner state is active, the first model uses Performance Factor Analysis over graph edges. New recommendation models are compared against baselines before they become product claims.
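The standard Performance Factor Analysis form models the log-odds of a correct answer as a skill easiness term plus weighted counts of prior successes and failures. The sketch below uses that textbook form with made-up parameter values; the real model's fitted parameters and per-edge structure are not shown here.

```python
import math

def pfa_correct_prob(beta, gamma, rho, successes, failures):
    # PFA: logit(p) = beta + gamma * successes + rho * failures,
    # where beta is skill easiness, gamma rewards prior correct
    # attempts, and rho (usually negative) weights prior misses.
    logit = beta + gamma * successes + rho * failures
    return 1.0 / (1.0 + math.exp(-logit))

# Illustrative parameters, not fitted values.
beta, gamma, rho = -0.5, 0.4, -0.2
p_before = pfa_correct_prob(beta, gamma, rho, successes=0, failures=0)
p_after = pfa_correct_prob(beta, gamma, rho, successes=3, failures=1)
print(round(p_before, 3), round(p_after, 3))  # → 0.378 0.622
```

Because the inputs are explicit attempt counts, an estimate can always report how much evidence sits behind it: here, zero attempts versus four.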

Source grounding

A source has a role. It may provide intuition, a formal statement, proof machinery, assumptions, exercises, implementation detail, historical context, or production practice.

The source map records what each reference is used for and where it connects to the topic graph.

Source authority is tracked separately from source support. Canonical textbooks, peer-reviewed papers, arXiv preprints, authoritative documentation, and informal sources should not carry the same review weight.
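Keeping authority separate from support means a claim's strongest backing is chosen along two axes. The tier names below follow the list in the text; the numeric ordering, record fields, and example sources are assumptions for illustration.

```python
# Review weight by source class; lower number = stronger authority.
AUTHORITY_TIERS = {
    "canonical_textbook": 1,
    "peer_reviewed_paper": 2,
    "arxiv_preprint": 3,
    "authoritative_docs": 4,
    "informal": 5,
}

def strongest_support(sources):
    # Authority and support are independent: only sources whose
    # status is "supports" count, and among those the highest
    # tier wins. An informal post never outranks a textbook.
    supporting = [s for s in sources if s["status"] == "supports"]
    return min(supporting,
               key=lambda s: AUTHORITY_TIERS[s["tier"]],
               default=None)

sources = [
    {"ref": "blog-post", "tier": "informal", "status": "supports"},
    {"ref": "rudin", "tier": "canonical_textbook", "status": "supports"},
    {"ref": "survey", "tier": "peer_reviewed_paper", "status": "context"},
]
print(strongest_support(sources)["ref"])  # → rudin
```

Note that the peer-reviewed survey loses here despite its tier: its role is context, not support.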

The public source map is kept on the References page.

Corrections

Corrections target the smallest broken object: sentence, equation, edge, exercise, source note, or diagram.

Public pages keep claim text, source notes, and graph edges close together so mistakes can be traced and fixed without rewriting the whole page.

Claim trust

The trust unit is a claim, not a whole page. A theorem, definition, empirical result, or algorithmic statement can have its own assumptions, failure modes, source roles, and review state.

Claim records are versioned at the claim level. When a source, assumption, Lean wrapper, or statement changes, dependent claims can be queued for re-review without rewriting an entire topic page.

Source-checked means a reviewer attached source-location support. Formally stated means a scoped Lean declaration exists for the claim. Lean verified is stronger: it requires a CI-generated Lean mapping with no sorry or admit.

Some governed claims have source-check records or Lean-verified wrappers, but most of the corpus is still reviewed through source trails, assumption records, and the correction workflow. TheoremPath is not Lean verified as a site, and a single proof artifact does not give a topic page a public badge.

Adaptive learning boundary

The current adaptive layer uses question evidence, a Q-matrix, graph prerequisites, and interpretable mastery updates. Learner estimates should name the amount of evidence behind them.

The system does not present a single quiz run as a fixed ability score. Recommendation policies are treated as experiments: shown, clicked, completed, and ignored events matter.

Practice

Practice pages collect worked derivations, implementations, simulations, ablations, plots, paper maps, and short reports. See Practice and Projects.