Sequential Inference
E-Values and Anytime-Valid Inference
A framework for hypothesis testing whose validity survives optional stopping, optional continuation, and data-dependent peeking. E-values are nonnegative random variables with expectation at most one under the null; Ville's inequality makes the guarantee anytime-valid.
Prerequisites
Why This Matters
Every classical hypothesis test fixes the sample size in advance. The $p$-value guarantee, $\mathbb{P}_{H_0}(p \le \alpha) \le \alpha$, breaks the moment you peek at the data and stop early when the result looks good. This is the reason online A/B tests, adaptive clinical trials, and rolling LLM evaluations produce elevated false-positive rates in practice. The analyst follows the mathematics of a fixed-$n$ test but behaves as if they were running a sequential test, and the gap is filled with Type I errors.
E-values and anytime-valid inference close the gap. An e-value $E$ is a nonnegative statistic whose expectation under the null is at most one. By Markov's inequality, $E$ acts like a $p$-value in the sense that $\mathbb{P}_{H_0}(E \ge 1/\alpha) \le \alpha$. The key constructive fact is that a sequentially built product of conditional e-values (equivalently, the running product of likelihood ratios under a filtration $(\mathcal{F}_t)_{t \ge 0}$) is a nonnegative martingale or supermartingale under the null, hence an e-process. This gives a test that is valid at every stopping time simultaneously, not just at the sample size you had originally in mind. Peeking becomes a feature, not a failure mode. Arbitrary products of marginal (unconditional) e-values across experiments are not in general e-values, because $\mathbb{E}[E_1 E_2] \ne \mathbb{E}[E_1]\,\mathbb{E}[E_2]$ without independence; combining evidence across experiments without conditional structure relies on arithmetic averaging or weighted means, which preserve the e-value property.
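The inflation and its fix are both easy to see in simulation. The sketch below is an illustrative toy (the alternative mean $0.5$, horizon, and seed are arbitrary choices, not from any source above): it peeks at a two-sided $z$-test after every observation of an i.i.d. $N(0,1)$ stream, and runs a likelihood-ratio e-process on the same data.

```python
import math
import random

def z_pvalue(s, n):
    # Two-sided p-value for the mean of n i.i.d. N(0,1) draws with running sum s.
    z = abs(s) / math.sqrt(n)
    return math.erfc(z / math.sqrt(2.0))  # equals 2 * (1 - Phi(z))

random.seed(0)
alpha, horizon, trials, mu_alt = 0.05, 500, 2000, 0.5
naive_rejects = evalue_rejects = 0
for _ in range(trials):
    s = log_e = 0.0
    rejected_p = rejected_e = False
    for n in range(1, horizon + 1):
        x = random.gauss(0.0, 1.0)                # data generated under the null
        s += x
        if z_pvalue(s, n) <= alpha:
            rejected_p = True                     # peeker stops at the first p <= alpha
        log_e += mu_alt * x - mu_alt ** 2 / 2.0   # log likelihood ratio vs N(mu_alt, 1)
        if log_e >= math.log(1.0 / alpha):
            rejected_e = True                     # e-process rejects when E >= 1/alpha
    naive_rejects += rejected_p
    evalue_rejects += rejected_e

print(f"Type I rate, p-value with peeking: {naive_rejects / trials:.3f}")
print(f"Type I rate, e-process:            {evalue_rejects / trials:.3f}")
```

Under the null, the peeking $p$-value rejects far more often than $\alpha = 0.05$, while the e-process stays at or below $\alpha$ no matter how often it is monitored.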
The regulatory case is live, though it should not be overstated. The FDA's January 2026 draft guidance on Bayesian adaptive medical-device trials is not an e-value guidance document; it concerns prior elicitation, simulation-calibrated operating characteristics, and decision rules for adaptive Bayesian designs. But it is a useful regulatory signal: adaptive evidence accumulation, simulation-based calibration, and transparent stopping rules are becoming central enough that the FDA is writing formal guidance on adjacent ideas. Anytime-valid methods sit naturally in that family. The tech case has been live for a decade and is finally being taught.
How e-values differ from p-values: structural comparison
| Property | Classical $p$-value | E-value |
|---|---|---|
| Definition | $p$ such that $\mathbb{P}_{H_0}(p \le \alpha) \le \alpha$ | $E \ge 0$ with $\mathbb{E}_{H_0}[E] \le 1$ |
| Validity guarantee | At the fixed sample size committed in advance | At every stopping time simultaneously (Ville) |
| Behavior under peeking | Type I error inflates toward 1 with continuous monitoring | Type I error stays at $\alpha$ regardless of monitoring rule |
| Multiplication across studies | Not valid (Fisher combination requires independence + scaling) | Conditional products are e-values; arithmetic / weighted means combine across studies |
| Calibration gap | None (sharp at fixed $n$) | $\min(1, 1/E)$ is a (conservative) $p$-value via Markov |
| Behavior under composite null | Standard via uniform validity over the null set | Universal inference (Wasserman et al. 2020) gives a generic construction |
| Multiple testing | Bonferroni / BH on p-values | e-BH (Wang-Ramdas 2022) provides FDR control on e-values |
| Confidence interval | Inverted at fixed $n$ (CI is for one $n$) | Inverted into a confidence sequence (CI valid at every $t$) |
| Software | Standard everywhere | rivertest, safetestR, confseq, ridgesafe (R / Python) |
The trade-off is sharp: e-values give up the worst-case sharpness of fixed-time p-values in exchange for anytime validity. For a user who is going to peek (online A/B testing, sequential clinical trials, rolling LLM evaluation), the sharpness loss is more than recovered because the alternative — using p-values under peeking — has no validity guarantee at all.
An e-value is not a sequential p-value
A p-value is a probability statement: $\mathbb{P}_{H_0}(p \le \alpha) \le \alpha$ at the fixed sample size you committed to in advance. An e-value is a nonnegative evidence measure with $\mathbb{E}_{H_0}[E] \le 1$. The expectation bound, not a probability bound, is what makes multiplication of e-values under conditional validity a clean operation, and what lets $\min(1, 1/E)$ act as a calibrated p-value via Markov. Treating an e-value as "the new p-value" loses the structural difference: e-values combine across studies and dependent observations in ways p-values cannot, and they trade the worst-case sharpness of a fixed-time p-value for sequential validity.
What anytime-valid inference does not solve
Anytime-valid methods control Type I error under optional stopping provided the evidence process is valid under the planned filtration. They do not automatically rescue:
- model-selection bias (you still cannot search across models and report the winner without accounting for the search);
- adaptive-endpoint hacking (changing what you measure mid-study breaks the filtration);
- post-hoc subgroup discovery (the same problem in disguise);
- invalid nuisance estimation in semiparametric settings.
Each of these requires the selection or adaptation mechanism to be included in the validity argument. "Anytime-valid" is not a license to do anything adaptively; it is a guarantee about a specific kind of adaptation (when to stop) under a specified filtration.
Formal Setup
Let $H_0$ be a null hypothesis specifying a set $\mathcal{P}_0$ of probability measures on a data stream $X_1, X_2, \dots$ Fix a significance level $\alpha \in (0,1)$.
E-Value
A nonnegative random variable $E$ is an e-value for $H_0$ if $\mathbb{E}_P[E] \le 1$ for every $P \in \mathcal{P}_0$. By Markov, $\mathbb{P}_P(E \ge 1/\alpha) \le \alpha$ for every $P \in \mathcal{P}_0$.
The test rejects $H_0$ when $E \ge 1/\alpha$.
E-Process
A sequence $(E_t)_{t \ge 0}$ adapted to a filtration $(\mathcal{F}_t)_{t \ge 0}$ is an e-process for $H_0$ if for every stopping time $\tau$ (including $\tau = \infty$) and every $P \in \mathcal{P}_0$, $\mathbb{E}_P[E_\tau] \le 1$.
Equivalently, $(E_t)$ is upper-bounded by a nonnegative supermartingale with initial value at most one under every $P \in \mathcal{P}_0$. When $(E_t)$ is itself a nonnegative martingale with $E_0 = 1$, it is a test martingale.
Ville's Inequality
Statement
For any nonnegative supermartingale $(M_t)_{t \ge 0}$ with $\mathbb{E}[M_0] \le 1$ and any $\alpha \in (0,1)$,
$$\mathbb{P}\Big(\sup_{t \ge 0} M_t \ge \tfrac{1}{\alpha}\Big) \le \alpha.$$
Consequently, the test "reject when $M_t \ge 1/\alpha$ at any $t$" has Type I error at most $\alpha$ uniformly over all stopping times.
Intuition
A nonnegative supermartingale with expected starting value at most one cannot consistently grow large, because the expected value is bounded forever. Markov's inequality applied at a maximum (rather than at a fixed time) gives a maximal inequality. This single bound handles every continuation strategy: no matter how the analyst decides when to stop, the supremum control holds.
Proof Sketch
The proof is a clean application of optional stopping to a hitting time, plus monotone convergence to remove the time horizon.
Step 1: define the hitting time. Let $\tau = \inf\{t \ge 0 : M_t \ge 1/\alpha\}$, with the convention $\tau = \infty$ if the supremum never reaches $1/\alpha$. The event of interest is $\{\sup_t M_t \ge 1/\alpha\} = \{\tau < \infty\}$.
Step 2: stop the supermartingale. For any deterministic horizon $T$, the stopped process $(M_{t \wedge \tau})_{t \ge 0}$ is itself a nonnegative supermartingale. Doob's optional-stopping theorem applies to bounded stopping times for a supermartingale, and $\tau \wedge T$ is bounded by $T$, so $\mathbb{E}[M_{\tau \wedge T}] \le \mathbb{E}[M_0] \le 1$.
Step 3: lower-bound the expectation by the rejection probability. On the event $\{\tau \le T\}$ we have $M_{\tau \wedge T} = M_\tau \ge 1/\alpha$ by definition of $\tau$ (right-continuity in continuous time; discrete time is immediate). Since $M_{\tau \wedge T} \ge 0$ everywhere, $\mathbb{E}[M_{\tau \wedge T}] \ge \tfrac{1}{\alpha}\,\mathbb{P}(\tau \le T)$.
Step 4: combine. Steps 2 and 3 together give $\tfrac{1}{\alpha}\,\mathbb{P}(\tau \le T) \le \mathbb{E}[M_{\tau \wedge T}] \le 1$, hence $\mathbb{P}(\tau \le T) \le \alpha$.
Step 5: send $T \to \infty$. The events $\{\tau \le T\}$ are nested and increase to $\{\tau < \infty\}$ as $T \to \infty$. Continuity of probability from below gives $\mathbb{P}(\sup_t M_t \ge 1/\alpha) = \mathbb{P}(\tau < \infty) \le \alpha$.
The original formulation is Ville (1939), Étude critique de la notion de collectif. The modern textbook treatment is Williams (1991), Probability with Martingales, §A14, or Doob (1953), Stochastic Processes, Ch. VII §3. The inequality is sometimes called Doob's maximal inequality for non-negative supermartingales; the e-value literature uses "Ville's inequality" because Ville's 1939 formulation explicitly motivated the gambling-against-the-null interpretation and because the bound in the form $\mathbb{P}(\sup_t M_t \ge 1/\alpha) \le \alpha$ is exactly the anytime-valid Type I error guarantee.
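The maximal bound can also be checked numerically. The sketch below is illustrative (a Bernoulli likelihood-ratio test martingale against an arbitrary alternative $\theta = 0.7$; seed and horizon are arbitrary): simulate paths under a fair coin and count how often the running product ever reaches $1/\alpha$.

```python
import math
import random

random.seed(1)
alpha, horizon, paths = 0.05, 500, 2000
log_up, log_down = math.log(0.7 / 0.5), math.log(0.3 / 0.5)
threshold = math.log(1.0 / alpha)

crossings = 0
for _ in range(paths):
    log_m = 0.0
    for _ in range(horizon):
        # Likelihood-ratio test martingale for H0: theta = 1/2 vs theta = 0.7,
        # evaluated on data drawn from the null (a fair coin).
        log_m += log_up if random.random() < 0.5 else log_down
        if log_m >= threshold:
            crossings += 1
            break

print(f"P(sup M_t >= 1/alpha) approx {crossings / paths:.3f} (Ville bound: {alpha})")
```

The empirical crossing frequency sits below $\alpha = 0.05$, slightly so because the discrete increments overshoot the boundary.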
This proof is formalizable in Lean / Mathlib today; the relevant library entries are MeasureTheory.Martingale.OptionalStopping and the supermartingale-stopping lemmas under MeasureTheory.Filtration. A Lean wrapper for this argument is a natural follow-up to the existing verification/lean/TheoremPath/Probability/ modules.
Why It Matters
This is the mathematical engine underneath every anytime-valid procedure. It converts the fixed-sample Markov inequality into a sample-size-free maximal bound, at the cost of requiring the statistic to be a supermartingale under the null. Building such a supermartingale is the constructive content of the theory.
Failure Mode
The supermartingale property is with respect to a specific filtration; test statistics that peek at future data, or that are not adapted, violate it silently. Composite nulls require the supermartingale property to hold simultaneously under every null distribution, which is where universal inference and reverse information projection (RIPr) come in.
Construction Methods
Likelihood ratios. When $H_0 = \{P_0\}$ is simple and $Q$ is an alternative, the likelihood ratio $E_t = \prod_{i=1}^{t} \frac{q(X_i)}{p_0(X_i)}$ is a $P_0$-test martingale. This recovers Wald's sequential probability ratio test as a special case.
Universal inference (Wasserman, Ramdas, Balakrishnan 2020). For any composite null $\mathcal{P}_0$, split the data into two halves $D_0$ and $D_1$. Fit a maximum likelihood estimator $\hat{Q}$ on the first half and use
$$E = \frac{\prod_{i \in D_1} \hat{q}(X_i)}{\sup_{P \in \mathcal{P}_0} \prod_{i \in D_1} p(X_i)},$$
evaluated on the second half. This is an e-value with no regularity conditions, trading efficiency for universal validity.
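A minimal sketch of the split construction for the composite null $H_0: \mu \le 0$ of a unit-variance Gaussian (the null family, sample sizes, and seed are illustrative choices, not from the paper):

```python
import math
import random

def loglik(xs, mu):
    # Unit-variance Gaussian log-likelihood, up to an additive constant
    # that cancels in the likelihood ratio.
    return sum(-(x - mu) ** 2 / 2.0 for x in xs)

def log_universal_e(xs):
    # Universal-inference (split likelihood ratio) log e-value for H0: mu <= 0.
    # Numerator: second-half likelihood at the first-half MLE.
    # Denominator: second-half likelihood maximized over the null.
    half = len(xs) // 2
    d0, d1 = xs[:half], xs[half:]
    mu_hat = sum(d0) / len(d0)              # unrestricted MLE on the first half
    mu_null = min(sum(d1) / len(d1), 0.0)   # null-restricted MLE on the second half
    return loglik(d1, mu_hat) - loglik(d1, mu_null)

random.seed(2)
# Under the null (mu = 0): Markov gives P(E >= 1/alpha) <= alpha = 0.05.
null_logs = [log_universal_e([random.gauss(0.0, 1.0) for _ in range(200)])
             for _ in range(2000)]
frac_large = sum(le >= math.log(20.0) for le in null_logs) / len(null_logs)
print(f"fraction of null e-values >= 20: {frac_large:.4f} (Markov bound: 0.05)")

# Under an alternative (mu = 0.5), the e-value is typically enormous.
alt_mean_log = sum(log_universal_e([random.gauss(0.5, 1.0) for _ in range(200)])
                   for _ in range(50)) / 50
print(f"mean log e-value under mu = 0.5: {alt_mean_log:.1f}")
```

Note the conservatism: under the null the e-value rarely gets anywhere near $1/\alpha$, which is the efficiency price of universal validity.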
Reverse information projection (RIPr) (Grünwald, de Heide, Koolen 2024). Take the reverse KL projection of the alternative onto the null, giving a minimax-optimal e-value within a specified class.
Method of mixtures and betting martingales (Shafer 2021). Interpret the e-process as the wealth of a player in a sequential betting game against the null, with the null's best strategy being to offer fair odds. The analyst bets; wealth accumulates under misspecification.
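A minimal betting sketch for $H_0: \mathbb{E}[X] = 1/2$ on a $[0,1]$-valued stream, with a fixed bet fraction (the constant $\lambda = 1$ and the Beta data source are illustrative choices):

```python
import random

def betting_wealth(xs, lam=1.0):
    # Wealth of a bettor wagering against H0: E[X] = 1/2 on a [0,1]-valued
    # stream. Each factor 1 + lam*(x - 1/2) is nonnegative for lam in [-2, 2]
    # and has expectation 1 under the null, so the running product of factors
    # is a test martingale.
    wealth, path = 1.0, []
    for x in xs:
        wealth *= 1.0 + lam * (x - 0.5)
        path.append(wealth)
    return path

random.seed(3)
alpha = 0.05
stream = [random.betavariate(7, 3) for _ in range(400)]  # true mean 0.7, not 1/2
path = betting_wealth(stream)
crossing = next((t for t, w in enumerate(path, start=1) if w >= 1.0 / alpha), None)
print(f"wealth first reaches 1/alpha = {1.0 / alpha:.0f} at observation {crossing}")
```

Because the null is false, the bettor's wealth grows exponentially and crosses the rejection boundary quickly; under the null it would stay below $1/\alpha$ with probability at least $1 - \alpha$.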
p-to-e and e-to-p Calibration
Vovk and Wang (2021) characterize the admissible calibrators converting between the two. A function $f : [0,1] \to [0,\infty]$ is a valid p-to-e calibrator if $\mathbb{E}[f(p)] \le 1$ for every $p$ super-uniform under $H_0$, equivalently if $f$ is non-increasing with $\int_0^1 f(u)\,du \le 1$.
- p-to-e: Two valid examples are $f(p) = \frac{1}{2\sqrt{p}}$ (which has $\int_0^1 f(u)\,du = 1$) and the family $f_\kappa(p) = \kappa\, p^{\kappa - 1}$ for $\kappa \in (0,1)$. The proposal $f(p) = 1/p$ does not integrate to a finite value over $(0,1]$ because $\int_0^1 u^{-1}\,du$ diverges, so it is not a valid calibrator without additional truncation or restriction.
- e-to-p: $p = \min(1, 1/E)$ is a valid p-value from any valid e-value $E$ (Markov's inequality applied to $E$).
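The integral condition is easy to check by Monte Carlo: for an exactly uniform p-value, the calibrated e-value should have expectation one (illustrative sketch; $\kappa = 0.75$ is an arbitrary member of the power family).

```python
import math
import random

random.seed(4)
U = [random.random() for _ in range(200_000)]  # exactly-uniform p-values (null)

# f(p) = 1/(2*sqrt(p)) integrates to 1 over (0,1], so E[f(p)] <= 1 for any
# super-uniform p; for exactly uniform p the expectation is exactly 1.
mean_f = sum(1.0 / (2.0 * math.sqrt(u)) for u in U) / len(U)
print(f"E[1/(2*sqrt(p))] for uniform p: {mean_f:.3f}")

# The power family kappa * p^(kappa - 1) also integrates to 1 for kappa in (0,1).
kappa = 0.75
mean_g = sum(kappa * u ** (kappa - 1.0) for u in U) / len(U)
print(f"E[kappa * p^(kappa-1)], kappa=0.75: {mean_g:.3f}")
```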
The conversions are lossy in an expected-value sense; the standalone strength of e-values is for sequential combination via running products of conditional e-values, and for arithmetic averaging of e-values across arbitrarily dependent experiments. The analogous operations on p-values do not preserve the validity guarantee.
Confidence Sequences
Inverting an e-process for $H_0: \theta = \theta_0$ over all $\theta_0$ gives a confidence sequence: a random sequence of sets $(C_t)_{t \ge 1}$ such that $\mathbb{P}(\theta \in C_t \text{ for all } t) \ge 1 - \alpha$. The guarantee is uniform in time. Howard, Ramdas, McAuliffe, Sekhon (2021) give time-uniform nonparametric confidence sequences for bounded means, quantiles, and Bernoulli rates.
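A small sketch of the inversion for a Bernoulli rate, using the Beta(1,1)-mixture test martingale over a grid of candidate $\theta_0$ values (grid size, seed, and the true rate $0.6$ are arbitrary choices for illustration):

```python
import math
import random

def log_mixture_martingale(s, n, theta0):
    # Beta(1,1)-mixture likelihood-ratio martingale for H0: theta = theta0,
    # after s successes in n Bernoulli trials:
    #   M_n = Beta(s + 1, n - s + 1) / (theta0^s * (1 - theta0)^(n - s))
    log_num = math.lgamma(s + 1) + math.lgamma(n - s + 1) - math.lgamma(n + 2)
    log_den = s * math.log(theta0) + (n - s) * math.log(1.0 - theta0)
    return log_num - log_den

def confidence_sequence(s, n, alpha=0.05, grid=2000):
    # C_n = {theta0 : M_n(theta0) < 1/alpha}, valid simultaneously over all n.
    thetas = [(i + 0.5) / grid for i in range(grid)]
    inside = [th for th in thetas
              if log_mixture_martingale(s, n, th) < math.log(1.0 / alpha)]
    return min(inside), max(inside)

random.seed(5)
theta, s = 0.6, 0
for n in range(1, 501):
    s += random.random() < theta
    if n in (10, 100, 500):
        lo, hi = confidence_sequence(s, n)
        print(f"n={n:4d}  s={s:3d}  CS = [{lo:.3f}, {hi:.3f}]")
```

The intervals shrink with $n$, and with probability at least $1 - \alpha$ every interval in the sequence contains the true rate; the grid-based inversion is for clarity, not efficiency.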
Multiple Testing: The e-BH Procedure
e-BH Controls FDR Under Arbitrary Dependence
Statement
For e-values $E_1, \dots, E_K$ corresponding to $K$ null hypotheses, the e-BH procedure rejects the hypotheses with the $k^*$ largest e-values, where $k^*$ is the largest index $k$ such that the $k$-th largest e-value satisfies $E_{[k]} \ge K/(k\alpha)$. This procedure controls the false discovery rate at level $\alpha$, under arbitrary dependence among the e-values.
Intuition
The BH procedure for p-values requires positive regression dependence (PRDS) to control FDR. e-values replace that assumption with a structural property: a weighted average of e-values is an e-value, regardless of dependence. The FDR bound follows from a single application of Markov to the ratio of false discoveries to total discoveries.
Why It Matters
In LLM evaluation and multi-arm A/B testing, dependence among hypotheses is the rule, not the exception, and PRDS is rarely verifiable. e-BH gives FDR control with no dependence assumption, at the cost of slightly wider rejection regions than BH under favourable dependence.
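A direct implementation of the procedure as stated (the e-values below are made-up numbers for illustration):

```python
def ebh(evalues, alpha=0.05):
    # e-BH (Wang & Ramdas 2022): find the largest k such that the k-th largest
    # e-value satisfies E_[k] >= K / (k * alpha); reject those k hypotheses.
    K = len(evalues)
    order = sorted(range(K), key=lambda i: -evalues[i])  # indices, decreasing E
    k_star = 0
    for k in range(1, K + 1):
        if evalues[order[k - 1]] >= K / (k * alpha):
            k_star = k
    return sorted(order[:k_star])  # indices of rejected hypotheses

# Hypothetical e-values for 6 hypotheses at alpha = 0.05:
es = [300.0, 45.0, 2.0, 0.8, 150.0, 25.0]
print(ebh(es, alpha=0.05))  # → [0, 1, 4]
```

With $K = 6$ and $\alpha = 0.05$ the thresholds are $120/k$; the three largest e-values (300, 150, 45) clear their thresholds and the fourth (25) does not, so hypotheses 0, 1, and 4 are rejected.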
Applications
Adaptive clinical trials. The R package evalinger and its successors implement e-process monitoring with futility stops and platform-trial multiplicity control. Regulatory adoption is the live frontier (FDA draft guidance, January 2026).
Online A/B testing. Confidence sequences replace fixed-$n$ tests at Netflix, Adobe, Optimizely, and similar shops. Any decision made "when the result looks stable" is an implicit sequential test, and a confidence sequence is the honest version.
LLM evaluation. Leaderboard comparisons are sequential by design: new models arrive, benchmarks are rerun, the top-$k$ ordering shifts. Selective inference on the final ranking requires e-processes, not $p$-values.
Relationship to Bayes and Classical Sequential Analysis
Bayes factors are e-values from the null's perspective: the expectation of the Bayes factor under the null is one. Wald's sequential probability ratio test is a likelihood-ratio test martingale. The modern e-value framework unifies these threads and adds explicit universal-inference and RIPr constructions for composite nulls that the classical theories handle only clumsily.
Exercises
Problem
A classical $p$-value satisfies $\mathbb{P}_{H_0}(p \le \alpha) \le \alpha$. An analyst peeks every $n$ observations and stops at the first $p \le \alpha$. Under $H_0$, roughly what is the Type I error rate after $k$ peeks, assuming the $p$-values at different stopping points are independent?
Problem
Build a test martingale for a point null $H_0: \theta = \theta_0$ against an unspecified alternative on an i.i.d. Bernoulli stream, using a Beta mixture prior on the alternative parameter. Explain why the mixture construction gives a universally valid e-process.
Problem
State the safe-testing characterization of Grünwald, de Heide, Koolen (2024): under what conditions does the reverse information projection give a GROW-optimal e-value, and what goes wrong for composite nulls where RIPr does not exist in closed form?
Open Problems and Frontier
Tight e-values for composite nulls without closed-form RIPr is a central open line, with partial progress via numerical RIPr and GROW approximations.
Time-uniform CLT (Waudby-Smith 2024-26) extends e-values into settings that previously required fixed-$n$ asymptotics, including conditional independence testing without Model-X, observational causal inference, and semiparametric inference more broadly.
Anytime-valid conformal prediction via e-value-based prediction sets is an active frontier; the split-conformal guarantees extend to sequential prediction when the calibration scores generate a test martingale.
Regulatory adoption. The FDA January 2026 draft Bayesian guidance is the policy opening. e-value methods map cleanly onto the guidance's flexibility for non-traditional success criteria. Practical adoption in platform trials, futility monitoring, and adaptive design is the next 2-3 years.
Hypothesis testing with e-values (Ramdas, Wang 2025, Waterloo book draft) is the first textbook treatment; field-wide adoption will track the book's completion.
References
Foundational:
- Ville, Étude Critique de la Notion de Collectif (Gauthier-Villars, 1939). The original maximal inequality.
- Shafer, Vovk, Probability and Finance: It's Only a Game! (Wiley, 2001). Chapters 3-5.
Modern e-value theory:
- Ramdas, Grünwald, Vovk, Shafer, "Game-Theoretic Statistics and Safe Anytime-Valid Inference." Statistical Science 38(4) (2023), 576-601. The canonical survey.
- Vovk, Wang, "E-Values: Calibration, Combination and Applications." Annals of Statistics 49(3) (2021), 1736-1754.
- Grünwald, de Heide, Koolen, "Safe Testing." Journal of the Royal Statistical Society B 86(4) (2024), 1091-1128.
- Shafer, "Testing by Betting: A Strategy for Statistical and Scientific Communication." Journal of the Royal Statistical Society A 184(2) (2021), 407-431.
Confidence sequences and time-uniform bounds:
- Howard, Ramdas, McAuliffe, Sekhon, "Time-Uniform, Nonparametric, Nonasymptotic Confidence Sequences." Annals of Statistics 49(2) (2021), 1055-1080.
- Waudby-Smith, Ramdas, "Estimating Means of Bounded Random Variables by Betting." Journal of the Royal Statistical Society B 86(1) (2024), 1-27.
Multiple testing:
- Wang, Ramdas, "False Discovery Rate Control with E-Values." Journal of the Royal Statistical Society B 84(3) (2022), 822-852.
- Wasserman, Ramdas, Balakrishnan, "Universal Inference." Proceedings of the National Academy of Sciences 117(29) (2020), 16880-16890.
Textbook (in preparation):
- Ramdas, Wang, Hypothesis Testing with E-Values. Book draft, University of Waterloo, 2025.
Next Topics
- Split conformal prediction: the companion uncertainty-quantification framework, with anytime-valid extensions an open line.
- Hypothesis testing for ML: the fixed-sample baseline e-values replace.
- Martingale theory: the mathematical substrate; Ville's inequality is a maximal inequality for nonnegative supermartingales.
Last reviewed: April 26, 2026
Canonical graph
Required before and derived from this topic
These links come from prerequisite edges in the curriculum graph. Editorial suggestions are shown here only when the target page also cites this page as a prerequisite.
Required prerequisites
- Maximum Likelihood Estimation: Theory, Information Identity, and Asymptotic Efficiency (layer 0B · tier 1)
- Measure-Theoretic Probability (layer 0B · tier 1)
- Tabular Foundation Models as Bayesian Inference Engines (layer 3 · tier 1)
- Weighted Conformal Prediction Under Covariate Shift (layer 3 · tier 1)
- Martingale Theory (layer 0B · tier 2)
Derived topics
No published topic currently declares this as a prerequisite.