Predictive Uncertainty
Weighted Conformal Prediction Under Covariate Shift
The extension of split conformal prediction that restores finite-sample coverage when the test distribution differs from the training distribution, at the cost of knowing or estimating the likelihood ratio between them.
Prerequisites
Why This Matters
Split conformal prediction rests on exchangeability. Exchangeability fails the moment the test distribution differs from the training distribution, which happens automatically when models are trained six months before deployment or evaluated on a different population than they were fitted on. Under covariate shift the nominal coverage silently degrades, often substantially, and the usual diagnostic (hold out a clean test set) is unavailable because the clean test set is itself drawn from the training distribution.
Weighted conformal prediction recovers validity by reweighting the calibration points using the likelihood ratio between the two distributions. The construction is due to Tibshirani, Barber, Candès, Ramdas (2019) and is the single most-cited extension of split conformal. The same machinery turns out to give prediction intervals for individual treatment effects, connecting conformal prediction to the causal-inference literature through propensity scores.
Which Conformal Variant Fits Which Setting
Conformal prediction is not a single recipe; it is a family of recipes keyed to which assumption you can defend. This page is the covariate shift branch. The neighbours and what each can and cannot do:
| Setting | What changes between train and test | Which method gives valid coverage | What you must know |
|---|---|---|---|
| Exchangeable / IID | Nothing (the joint is fixed) | Split conformal | Nothing beyond the data itself |
| Covariate shift | Feature marginal $P_X \to Q_X$, conditional $P_{Y\mid X}$ unchanged | Weighted conformal (this page) | The likelihood ratio $w = dQ_X/dP_X$, exactly or via density-ratio estimation |
| Label shift | Label marginal $P_Y \to Q_Y$, class-conditional $P_{X\mid Y}$ unchanged | Label-shifted conformal (Podkopaev–Ramdas 2021) | Class proportion shift estimated on unlabeled test data |
| Streaming / time series | Distribution drifts over time | Adaptive Conformal Inference (ACI; Gibbs–Candès 2021) | A target miscoverage rate; the algorithm tracks the empirical rate online |
| Anytime-valid | Stopping time chosen after seeing the data | E-values and anytime-valid inference | A scoring rule that produces a non-negative martingale |
| Adversarial drift / concept drift | Both $P_X$ and $P_{Y\mid X}$ change, possibly adversarially | No distribution-free coverage guarantee in full generality | Use ACI + a domain-specific shift detector, or accept that intervals lose meaning |
The ordering is roughly increasing assumption strength on the operator side: split conformal asks for nothing, weighted conformal asks for the likelihood ratio, label-shifted conformal asks for class proportions, ACI asks for an online learning rate, anytime-valid asks for a martingale construction. If the assumption you can defend is weaker than the assumption your method needs, the coverage guarantee silently fails and the dashboard does not warn you.
Formal Setup
Training data $(X_1, Y_1), \dots, (X_n, Y_n)$ are drawn i.i.d. from a joint distribution $P = P_X \times P_{Y\mid X}$ with feature marginal $P_X$. The test point $(X_{n+1}, Y_{n+1})$ has feature marginal $Q_X$, possibly different from $P_X$, but the conditional law $P_{Y\mid X}$ is assumed the same across both distributions. This is the covariate shift setting; label shift and concept drift are handled separately and are strictly harder.
Define the likelihood ratio
$$w(x) = \frac{dQ_X}{dP_X}(x),$$
the Radon-Nikodym derivative of $Q_X$ with respect to $P_X$. We assume $Q_X \ll P_X$ so the ratio exists; where this fails the problem is ill-posed and no distribution-free correction is possible.
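For concreteness, a minimal sketch of an exact likelihood ratio, assuming (purely for illustration, not from the source) Gaussian feature marginals $P_X = N(0,1)$ and $Q_X = N(1,1)$:

```python
import math

def gaussian_pdf(x, mean, sd):
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

def likelihood_ratio(x, p_mean=0.0, q_mean=1.0, sd=1.0):
    # w(x) = dQ_X/dP_X(x); for equal-variance Gaussians this equals
    # exp((q_mean - p_mean) * x / sd**2 + (p_mean**2 - q_mean**2) / (2 * sd**2)).
    return gaussian_pdf(x, q_mean, sd) / gaussian_pdf(x, p_mean, sd)
```

At the midpoint $x = 0.5$ the two densities coincide, so $w(0.5) = 1$; features to the right of the midpoint are upweighted, features to the left are downweighted.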
Why Split Conformal Fails
Under exchangeability the test score's rank among the $n+1$ calibration-plus-test scores is uniform on $\{1, \dots, n+1\}$. Under covariate shift this uniformity breaks: feature values that are more likely under $Q_X$ than under $P_X$ are overrepresented at the test point, so when scores are larger in the shifted region the test score tends to fall at higher ranks than the calibration scores. Split conformal, which assumes a uniform rank, undercovers on the tails where the shift is largest.
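A quick simulation illustrates the degradation. All the specifics here are illustrative assumptions, not from the source: a known mean model $\mu(x) = x$, heteroscedastic noise so that the shift actually matters, calibration features from $N(0,1)$, and test features from $N(2,1)$:

```python
import math
import random

random.seed(0)

def draw(mean):
    # Same conditional law under both distributions; only the feature
    # marginal shifts. Noise grows with |x|, so the shift moves mass
    # toward larger scores.
    x = random.gauss(mean, 1.0)
    y = x + random.gauss(0.0, 0.5 + abs(x))
    return x, y

def score(x, y):
    return abs(y - x)  # residual score for the (assumed known) mean model

alpha, n_cal, n_trials = 0.1, 200, 2000
covered = 0
for _ in range(n_trials):
    cal = sorted(score(*draw(0.0)) for _ in range(n_cal))
    k = math.ceil((1 - alpha) * (n_cal + 1))  # split conformal index
    qhat = cal[k - 1]
    x, y = draw(2.0)                          # test feature from the shifted marginal
    covered += score(x, y) <= qhat
coverage = covered / n_trials
print(coverage)
```

The printed empirical coverage falls well below the nominal $0.9$, and no holdout drawn from the training distribution would have revealed it.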
Weighted Exchangeability
Random variables $Z_1, \dots, Z_{n+1}$, with $Z_i = (X_i, Y_i)$, are weighted exchangeable with weight function $w$ if the joint density factors as
$$f(z_1, \dots, z_{n+1}) = \frac{w(x_{n+1})}{C} \prod_{i=1}^{n+1} p(z_i)$$
for some base density $p$ and normalizing constant $C$. Equivalently, the first $n$ points are i.i.d. from $P$ and the last point is drawn from $Q$, whose density with respect to $P$ is proportional to $w$ in the feature coordinate.
The key observation is that weighted exchangeability preserves a weighted version of rank uniformity, which is exactly what we need for a coverage argument.
The Weighted Quantile Construction
Compute calibration scores $S_i = S(X_i, Y_i)$, $i = 1, \dots, n$, as before. Define normalized weights on the calibration and test points:
$$p_i(x) = \frac{w(X_i)}{\sum_{j=1}^{n} w(X_j) + w(x)}, \quad i = 1, \dots, n, \qquad p_{n+1}(x) = \frac{w(x)}{\sum_{j=1}^{n} w(X_j) + w(x)}.$$
The test-point weight $w(x)$ appearing in the denominator is the subtle point most implementations get wrong. The prediction set is
$$\hat{C}(x) = \{\, y : S(x, y) \le \hat{q}_{1-\alpha}(x) \,\},$$
where $\hat{q}_{1-\alpha}(x)$ is the weighted $(1-\alpha)$-quantile of the distribution that puts mass $p_i(x)$ on score $S_i$ and mass $p_{n+1}(x)$ on $+\infty$ (so the test point always has a chance to fail).
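A minimal sketch of the weighted quantile in code, with the test-point mass at $+\infty$ made explicit (the function name and interface are my own, not a library API):

```python
import math

def weighted_quantile(scores, weights, test_weight, alpha):
    """(1 - alpha)-quantile of the distribution putting mass
    w_i / (sum_j w_j + w_test) on each calibration score S_i and the
    remaining mass p_{n+1} on +infinity (the test point's own atom)."""
    total = sum(weights) + test_weight
    cum = 0.0
    for s, wt in sorted(zip(scores, weights)):
        cum += wt / total
        if cum >= 1 - alpha:
            return s
    return math.inf  # quantile falls in the +inf atom: predict the whole space
```

With all weights equal to 1 this reduces to the split conformal cutoff, the $\lceil (1-\alpha)(n+1) \rceil$-th smallest score; when the test weight is heavy the $+\infty$ atom can swallow the quantile, and the honest answer is an uninformative set.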
Main Theorem
Weighted Conformal Coverage
Statement
Let $\hat{C}(X_{n+1})$ be the weighted split conformal prediction set constructed above. Then under the joint law where calibration points are drawn i.i.d. from $P$ and the test point from $Q = Q_X \times P_{Y\mid X}$,
$$\mathbb{P}\{\, Y_{n+1} \in \hat{C}(X_{n+1}) \,\} \ge 1 - \alpha.$$
Intuition
Reweighting by $w$ converts the non-exchangeable setup into a weighted-exchangeable one. The rank of the test score is then distributed over the index set $\{1, \dots, n+1\}$ with probabilities proportional to the weights, and the weighted quantile catches it at the usual level. The proof is a reduction to ordinary exchangeability on an enlarged probability space.
Proof Sketch
The argument from Tibshirani, Barber, Candès, Ramdas (2019) reduces the covariate-shift coverage problem to a weighted analogue of the ordinary-exchangeability rank argument used in split conformal.
Step 1: write the joint density. The calibration data $(Z_1, \dots, Z_n)$ is i.i.d. from $P$ (the training distribution); the test point $Z_{n+1}$ is drawn from $Q = Q_X \times P_{Y\mid X}$ (the deployment distribution), with the same conditional as $P$. Under the assumption $Q_X \ll P_X$ with likelihood ratio $w = dQ_X/dP_X$, the joint density of $(Z_1, \dots, Z_{n+1})$ relative to the all-from-$P$ measure $P^{n+1}$ is
$$\frac{d(P^n \times Q)}{dP^{n+1}}(z_1, \dots, z_{n+1}) = w(x_{n+1}).$$
That is: under the mixed law, the joint distribution of $(Z_1, \dots, Z_{n+1})$ differs from the all-from-$P$ distribution by exactly the factor $w$ applied to the test point.
Step 2: weighted exchangeability of the augmented sample. For any permutation $\sigma$ of $\{1, \dots, n+1\}$, the $\sigma$-permuted joint distribution puts the factor $w$ on position $\sigma(n+1)$ instead of position $n+1$. By the factorization above, conditional on the unordered multiset being any specific collection $\{z_1, \dots, z_{n+1}\}$, the conditional probability that position $i$ holds the test-distribution point is proportional to the weight at the $x$-coordinate of position $i$:
$$\mathbb{P}\{\text{test index} = i \mid \text{multiset}\} = \frac{w(x_i)}{\sum_{j=1}^{n+1} w(x_j)}.$$
This is the weighted exchangeability property: the conditional distribution of which index is the test point is determined by the normalized weights, not by the ordering of the augmented sample.
Step 3: apply to the rank of the test score. Define $S_i = S(X_i, Y_i)$ for $i = 1, \dots, n+1$. Conditional on the multiset, the rank of $S_{n+1}$ (call it $R$) is determined by which index in the augmented sample is the test index. By Step 2, for each $k \in \{1, \dots, n+1\}$
$$\mathbb{P}\{R = k \mid \text{multiset}\} = \frac{w(x_{(k)})}{\sum_{j=1}^{n+1} w(x_j)},$$
where $x_{(k)}$ denotes the $x$-coordinate of the $k$-th-smallest score. The rank is therefore "weighted-uniform" over $\{1, \dots, n+1\}$ with weights proportional to the likelihood ratios.
Step 4: weighted-quantile cutoff catches the test rank. The weighted conformal prediction set is built so that the cutoff $\hat{q}_{1-\alpha}(X_{n+1})$ is the $(1-\alpha)$-quantile of
$$\sum_{i=1}^{n} p_i(X_{n+1})\, \delta_{S_i} + p_{n+1}(X_{n+1})\, \delta_{+\infty},$$
with the $+\infty$ atom carrying the test-point's own weight $p_{n+1}(X_{n+1})$. Conditional on the unordered multiset and on $X_{n+1}$ (which determines the weights), the probability that $S_{n+1}$ falls at or below $\hat{q}_{1-\alpha}(X_{n+1})$ is the cumulative weighted probability up to the cutoff, which by construction is at least $1 - \alpha$.
Marginalizing over the multiset and over $X_{n+1}$:
$$\mathbb{P}\{\, Y_{n+1} \in \hat{C}(X_{n+1}) \,\} \ge 1 - \alpha.$$
The full proof, including the careful measure-theoretic treatment of conditioning on the unordered multiset and the handling of ties, is in Tibshirani, Barber, Candès, Ramdas (2019), Conformal Prediction Under Covariate Shift, NeurIPS 2019, Section 3 and Lemma 1.
The argument is a clean analogue of the rank-uniformity argument that drives split conformal: in the unweighted case, the conditional distribution of the test rank is uniform on $\{1, \dots, n+1\}$; in the weighted case, it is the obvious generalization, with probabilities proportional to the likelihood-ratio weights. The $+\infty$ atom in the weighted quantile is the analogue of the implicit mass-$1/(n+1)$ atom in the split-conformal quantile: it makes the construction graceful when the desired weighted quantile lies beyond the largest observed score.
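The coverage claim can be checked numerically. A minimal sketch, under assumed marginals $P_X = N(0,1)$ and $Q_X = N(1,1)$ (so the exact ratio is $w(x) = e^{x - 1/2}$) and a heteroscedastic score for which unweighted split conformal would undercover:

```python
import math
import random

random.seed(1)

def w(x):
    # Exact likelihood ratio dQ_X/dP_X for Q_X = N(1,1) vs P_X = N(0,1).
    return math.exp(x - 0.5)

def weighted_quantile(scores, weights, test_weight, alpha):
    total = sum(weights) + test_weight
    cum = 0.0
    for s, wt in sorted(zip(scores, weights)):
        cum += wt / total
        if cum >= 1 - alpha:
            return s
    return math.inf

def draw_score(x):
    # |Y - mu(X)| with noise that grows with |x|: scores depend on the feature.
    return abs(random.gauss(0.0, 0.3 + abs(x)))

alpha, n_cal, n_trials = 0.1, 300, 2000
covered = 0
for _ in range(n_trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n_cal)]
    scores = [draw_score(x) for x in xs]
    x_test = random.gauss(1.0, 1.0)          # feature from the shifted marginal
    qhat = weighted_quantile(scores, [w(x) for x in xs], w(x_test), alpha)
    covered += draw_score(x_test) <= qhat
coverage = covered / n_trials
print(coverage)
```

The empirical coverage should sit at or slightly above the nominal $1 - \alpha = 0.9$, as the theorem promises (the guarantee is a lower bound, so mild overcoverage is expected).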
The proof is formalizable in Lean / Mathlib in principle (the required machinery is permutation symmetry + conditional probability + supremum-of-events arguments, all in scope for MeasureTheory); the formalization would be a substantial project comparable to Mathlib's existing entries on the exchangeability of i.i.d. sequences. No such wrapper exists at the time of writing.
Why It Matters
The guarantee is finite-sample, distribution-free over the conditional $P_{Y\mid X}$, and requires no assumption on the function class of the predictor or the dimension of the features. The only assumption beyond split conformal is access to the exact likelihood ratio $w = dQ_X/dP_X$. This is plausible in some causal settings (randomized assignment with a known propensity model) and in some carefully-controlled deployment settings; it is essentially never literally true in observational covariate-shift applications, where $w$ must be estimated.
Failure Mode
If $w$ is unknown and replaced by an estimate $\hat{w}$, the exact finite-sample guarantee no longer holds. Coverage bias is first-order in $\|\hat{w} - w\|$ for a naive plug-in; the actual coverage now depends on density-ratio estimation error, sample-splitting design, and any robustification scheme. Doubly robust variants (next section) reduce the bias to a product of two estimation errors, but the resulting guarantee becomes asymptotic / approximate, not the exact finite-sample bound stated above.
Doubly Robust Weighted Conformal
The finite-sample guarantee in the main theorem assumes $w$ is known exactly. Once $w$ must be estimated, the distribution-free finite-sample guarantee no longer holds at face value: coverage depends on density-ratio estimation error, sample-splitting design, and which robustification scheme you use. A plug-in procedure inherits first-order bias from the weight estimator.
Treating covariate shift as a missing-data problem gives a cleaner path when $w$ is unknown: fit a quantile regression for $Y$ given $X$ on the training distribution, fit a weight model $\hat{w}$, and combine them in an AIPW-style construction of the conformal score. Coverage bias then becomes the product of the two estimation errors, analogous to the double-robustness property in double/debiased machine learning, but the resulting guarantee is asymptotic / approximate, not the same exact finite-sample guarantee given for known $w$. Yang, Kuchibhotla, and Tchetgen Tchetgen (JRSSB 2024) develop this construction using efficient-influence-function machinery; the missing-data framing makes the double robustness feel inevitable rather than a trick.
Connection to Causal Inference
Individual treatment effect (ITE) prediction is covariate shift in disguise. Under unconfoundedness, the conditional distribution of $Y(1)$ in the treated subpopulation equals the conditional distribution under the counterfactual "treat everyone" population, with covariate ratio proportional to $1/e(x)$ (the inverse propensity); symmetrically the control conditional uses $1/(1 - e(x))$. Weighted conformal prediction with the corresponding weight produces valid prediction intervals for $Y(1)$ and $Y(0)$ separately; ITE intervals combine the two and inherit the finite-sample guarantee cleanly under randomized assignment / perfect compliance. In purely observational settings the same construction works under strong ignorability, but the guarantee usually becomes approximate or doubly robust depending on how the propensity is estimated; Lei and Candès (2021) make this randomized-versus-observational distinction explicitly. This links the conformal literature directly to semiparametric causal inference.
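The mechanics for the treated arm can be sketched under strong simplifying assumptions (known logistic propensity $e(x)$, known outcome mean model, scores and setup invented for illustration). The weight $w(x) \propto 1/e(x)$ maps the treated subpopulation's covariate distribution back to the full population, and the proportionality constant cancels in the normalized weights:

```python
import math
import random

random.seed(3)

def e(x):
    # Assumed known propensity score, logistic in x.
    return 1.0 / (1.0 + math.exp(-x))

def weighted_quantile(scores, weights, test_weight, alpha):
    total = sum(weights) + test_weight
    cum = 0.0
    for s, wt in sorted(zip(scores, weights)):
        cum += wt / total
        if cum >= 1 - alpha:
            return s
    return math.inf

# Calibration set: treated units only. Their covariate density is
# proportional to e(x) p(x); the ratio back to the full population is
# w(x) = c / e(x), and c cancels after normalization.
xs, scores = [], []
while len(xs) < 300:
    x = random.gauss(0.0, 1.0)
    if random.random() < e(x):                      # unit received treatment
        xs.append(x)
        scores.append(abs(random.gauss(0.0, 1.0)))  # |Y(1) - muhat_1(x)|, assumed model
x_new = random.gauss(0.0, 1.0)                      # target: fresh population draw
qhat = weighted_quantile(scores, [1.0 / e(x) for x in xs], 1.0 / e(x_new), 0.1)
print(qhat)
```

The cutoff `qhat` defines the interval $\hat{\mu}_1(x_{\text{new}}) \pm \hat{q}$ for $Y(1)$; the control arm repeats the construction with weight $1/(1 - e(x))$ on untreated units.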
Sensitivity Analysis Under Unmeasured Confounding
Unconfoundedness is a strong assumption. Rosenbaum's marginal sensitivity model parameterizes hidden confounding by $\Gamma \ge 1$, interpreted as the worst-case ratio between the true and observed propensity odds. Under a $\Gamma$-bounded violation, Jin, Ren, Candès (2023) give robust weighted conformal intervals with coverage guarantees that degrade gracefully with $\Gamma$. This provides an honest answer to "what if unconfoundedness fails by a factor of $\Gamma$?"
Beyond Covariate Shift
The nonexchangeable framework of Barber, Candès, Ramdas, Tibshirani (2023) generalizes weighted conformal to arbitrary user-chosen weights with coverage guarantees stated as a function of the total variation distance between the assumed and true weighting. This opens the door to time series, spatial data, and label shift, each at the cost of a coverage gap that reflects the modelling mismatch.
Exercises
Problem
Three calibration points have features $x_1, x_2, x_3$, scores $S_1 \le S_2 \le S_3$, and known likelihood ratios $w(x_1), w(x_2), w(x_3)$. A test point has feature $x$ with likelihood ratio $w(x)$. Compute the weighted $(1-\alpha)$-quantile used by weighted conformal at level $\alpha$.
Problem
Show that when $w \equiv 1$ (no shift) the weighted conformal construction reduces exactly to the split conformal construction of the previous page, including the ceiling quantile level $\lceil (1-\alpha)(n+1) \rceil$. Identify where the $n+1$ in the denominator of split conformal comes from in the weighted derivation.
Problem
State the Lei-Candès (2021) construction for ITE prediction intervals using weighted conformal with estimated propensity. Identify what happens to coverage when the propensity estimator converges at a rate slower than the parametric rate $n^{-1/2}$.
Open Problems and Frontier
Weighted conformal under label shift rather than covariate shift is harder because the weight depends on the label $Y$, which is unobserved at test time; current work uses label-conditional scores or EM-style re-estimation.
Combining weighted conformal with anytime-valid inference (Koning, van Meer 2025 style) opens prediction sets that remain valid under sequential peeking and distribution drift simultaneously; the distribution-shift version is not yet fully written down.
Tight lower bounds on coverage loss as a function of density-ratio estimation error. Current guarantees are loose and there is room for sharp finite-sample rates.
Conditional coverage under covariate shift inherits the impossibility result from the split conformal page (Barber, Candès, Ramdas, Tibshirani 2021). Partial guarantees under smoothness or localizability are the current line of attack.
High-dimensional density-ratio estimation is the practical bottleneck. Methods based on discriminative classifiers, score matching, and kernel-based estimators trade off sample complexity against bias in ways that matter for the coverage product-rate.
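The discriminative-classifier route mentioned above can be sketched in a few lines: pool draws from $P_X$ (label 0) and $Q_X$ (label 1), fit a probabilistic classifier, and convert class probabilities to a ratio via $\hat{w}(x) = (n_P / n_Q)\, \hat{p}(1 \mid x) / \hat{p}(0 \mid x)$. The Gaussian marginals and the hand-rolled logistic fit below are illustrative assumptions, not a recommended production estimator:

```python
import math
import random

random.seed(2)

xp = [random.gauss(0.0, 1.0) for _ in range(2000)]   # draws from P_X
xq = [random.gauss(1.0, 1.0) for _ in range(2000)]   # draws from Q_X
data = [(x, 0.0) for x in xp] + [(x, 1.0) for x in xq]

# 1-d logistic regression p(test | x) = sigmoid(a*x + b), full-batch GD.
a = b = 0.0
lr = 0.05
for _ in range(300):
    ga = gb = 0.0
    for x, lab in data:
        p = 1.0 / (1.0 + math.exp(-(a * x + b)))
        ga += (p - lab) * x
        gb += (p - lab)
    a -= lr * ga / len(data)
    b -= lr * gb / len(data)

def w_hat(x):
    # Density-ratio estimate: (n_P / n_Q) * p(1|x) / p(0|x).
    p = 1.0 / (1.0 + math.exp(-(a * x + b)))
    return (len(xp) / len(xq)) * p / (1.0 - p)
```

For these marginals the true ratio is $e^{x - 1/2}$, so $\hat{w}(0.5)$ should land near 1. Plugging $\hat{w}$ into the weighted quantile gives exactly the plug-in procedure whose coverage bias is first-order in the estimation error.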
References
Canonical:
- Tibshirani, Barber, Candès, Ramdas, "Conformal Prediction Under Covariate Shift." NeurIPS 2019. The founding paper.
- Lei, Candès, "Conformal Inference of Counterfactuals and Individual Treatment Effects." Journal of the Royal Statistical Society B 83(5) (2021), 911-938.
- Barber, Candès, Ramdas, Tibshirani, "Conformal Prediction Beyond Exchangeability." Annals of Statistics 51(2) (2023), 816-845.
Robustness and sensitivity:
- Jin, Ren, Candès, "Sensitivity Analysis of Individual Treatment Effects: A Robust Conformal Inference Approach." Proceedings of the National Academy of Sciences 120(6) (2023).
- Yang, Kuchibhotla, Tchetgen Tchetgen, "Doubly Robust Calibration of Prediction Sets Under Covariate Shift." Journal of the Royal Statistical Society Series B (2024; arXiv:2203.01761). Authors are Yachong Yang, Arun Kumar Kuchibhotla, and Eric Tchetgen Tchetgen.
E-values and anytime-valid conformal:
- Gauthier, Bach, Jordan, "E-Values Expand the Scope of Conformal Prediction" (2025; arXiv:2503.13050). Batch anytime-valid conformal constructions using e-values.
- Koning and van Meer (2025), on fuzzy e-values and decision-theoretic conformal sets — connects e-values to fuzzy prediction sets and downstream decisions. Complementary to the Gauthier-Bach-Jordan construction. Locate via the authors' personal pages or Google Scholar; the arXiv ID is intentionally not asserted here.
Density-ratio estimation:
- Sugiyama, Suzuki, Kanamori, Density Ratio Estimation in Machine Learning (Cambridge University Press, 2012). Chapters 3-5.
Background:
- Angelopoulos, Bates, "A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification." Foundations and Trends in Machine Learning 16(4) (2023). Chapter 5.
Next Topics
- Double/debiased machine learning: the causal-inference companion; shares nuisance estimation problems.
- E-values and anytime-valid inference: sequential validity under peeking.
- Calibration and uncertainty: Platt, isotonic, temperature scaling and why conformal is distinct.
Last reviewed: April 24, 2026
Canonical graph
Required before and derived from this topic
These links come from prerequisite edges in the curriculum graph. Editorial suggestions are shown here only when the target page also cites this page as a prerequisite.
Required prerequisites
- Radon-Nikodym and Conditional Expectation (layer 0B, tier 1)
- Importance Sampling (layer 2, tier 1)
- Split Conformal Prediction (layer 2, tier 1)
- Causal Inference Basics (layer 3, tier 3)
Derived topics
- Double/Debiased Machine Learning (layer 3, tier 1)
- E-Values and Anytime-Valid Inference (layer 3, tier 1)
Graph-backed continuations