Bernstein Inequality
A variance-sensitive concentration inequality for independent bounded random variables. Bernstein sharpens Hoeffding when the variance is much smaller than the worst-case range.
Why This Matters
Hoeffding's inequality uses only the range of each summand. That is safe, but it can be loose when the summands are usually small. Bernstein's inequality adds one more piece of information: the variance. The reward is a tighter tail bound in low-variance or sparse-loss regimes.
This is the concentration inequality behind many variance-sensitive learning bounds. If a classifier makes rare errors, or a loss is usually near zero, a range-only bound treats the problem as if worst-case losses happen often. Bernstein separates two facts:
- the variables are bounded, so very large jumps cannot happen;
- the variance is small, so moderate deviations are rarer than Hoeffding predicts.
Mental Model
Bernstein has two regimes.
| Deviation size | Dominant term | Tail shape | Interpretation |
|---|---|---|---|
| Small ($t \lesssim v/b$) | variance $v$ | $\approx \exp\!\big(-t^2/(2v)\big)$ | Gaussian-like concentration with the true variance |
| Large ($t \gtrsim v/b$) | range $b$ | $\approx \exp\!\big(-3t/(2b)\big)$ | sub-exponential tail caused by bounded but nonzero jump size |
Hoeffding behaves as if the variance were always at the worst-case value allowed by the range. Bernstein uses the actual variance proxy $v$.
Formal Setup
Let $X_1, \dots, X_n$ be independent real-valued random variables. Write $\mu_i = \mathbb{E}[X_i]$, and assume each centered variable is bounded by $b > 0$:

$$|X_i - \mu_i| \le b \quad \text{almost surely},$$

and define the variance proxy

$$v = \sum_{i=1}^n \operatorname{Var}(X_i).$$

The quantity of interest is the centered sum

$$S_n = \sum_{i=1}^n (X_i - \mu_i).$$
Scalar Bernstein Inequality
Statement
If $X_1, \dots, X_n$ are independent, $\mu_i = \mathbb{E}[X_i]$, $|X_i - \mu_i| \le b$ almost surely, and $v = \sum_{i=1}^n \operatorname{Var}(X_i)$, then for every $t > 0$:

$$\mathbb{P}\left(\sum_{i=1}^n (X_i - \mu_i) \ge t\right) \le \exp\left(-\frac{t^2}{2\left(v + bt/3\right)}\right).$$

A two-sided version follows by applying the same bound to $-X_i$ and using a union bound:

$$\mathbb{P}\left(\left|\sum_{i=1}^n (X_i - \mu_i)\right| \ge t\right) \le 2\exp\left(-\frac{t^2}{2\left(v + bt/3\right)}\right).$$
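As a quick sanity check, the closed-form bound is easy to evaluate numerically. A minimal sketch in Python (the function name and parameter values are illustrative, not from the source):

```python
import math

def bernstein_tail(t, v, b):
    """One-sided Bernstein bound on P(sum of centered terms >= t) for
    independent variables with total variance v and |X_i - mu_i| <= b."""
    return math.exp(-t**2 / (2 * (v + b * t / 3)))

v, b = 1.0, 1.0
# Small deviation: the variance term dominates, so the bound is close to
# the Gaussian-style bound exp(-t^2 / (2v)).
print(bernstein_tail(0.5, v, b))
# Large deviation: the bt/3 term dominates and the decay is only
# sub-exponential, roughly exp(-3t / (2b)).
print(bernstein_tail(50.0, v, b))
```

Evaluating the same function in both regimes makes the two-regime table above concrete: the exponent transitions from quadratic in $t$ to linear in $t$.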
Intuition
The denominator $2(v + bt/3)$ has two terms. The variance term $v$ controls ordinary fluctuations near the mean. The bounded-jump term $bt/3$ prevents the bound from pretending the tail is Gaussian forever. Far enough from the mean, a single large-but-bounded jump becomes the limiting obstruction.
Proof Sketch
Use the Chernoff method. For $\lambda > 0$, with $S_n = \sum_{i=1}^n (X_i - \mu_i)$:

$$\mathbb{P}(S_n \ge t) \le e^{-\lambda t}\, \mathbb{E}\left[e^{\lambda S_n}\right].$$

Independence factors the moment generating function into a product. A Bernstein MGF lemma bounds each centered bounded summand:

$$\mathbb{E}\left[e^{\lambda (X_i - \mu_i)}\right] \le \exp\left(\frac{\lambda^2 \operatorname{Var}(X_i)/2}{1 - \lambda b/3}\right)$$

for $0 < \lambda < 3/b$. Multiplying over $i$ gives:

$$\mathbb{E}\left[e^{\lambda S_n}\right] \le \exp\left(\frac{\lambda^2 v/2}{1 - \lambda b/3}\right).$$

Optimizing the resulting exponent over $\lambda$ gives the displayed Bernstein bound.
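The optimization step can be checked numerically: grid-searching the Chernoff objective over $\lambda$ should do at least as well as the closed-form bound, which corresponds to the particular choice $\lambda = t/(v + bt/3)$. A sketch under assumed test values (names and grid are illustrative):

```python
import math

def chernoff_objective(lam, t, v, b):
    """exp(-lam*t) times the Bernstein MGF bound exp(lam^2 v/2 / (1 - lam b/3)),
    valid for 0 < lam < 3/b."""
    return math.exp(-lam * t + (lam**2 * v / 2) / (1 - lam * b / 3))

def bernstein_bound(t, v, b):
    """Closed-form Bernstein tail bound exp(-t^2 / (2(v + b t / 3)))."""
    return math.exp(-t**2 / (2 * (v + b * t / 3)))

t, v, b = 2.0, 1.0, 1.0
# Grid over lam in (0, 2.5/b), safely inside the lemma's range (0, 3/b)
# to avoid overflow as lam approaches 3/b.
grid = [k / 1000 * 2.5 / b for k in range(1, 1000)]
best = min(chernoff_objective(lam, t, v, b) for lam in grid)
# The grid minimum is at most the closed-form value, since the grid
# contains the lambda that produces the closed-form bound.
print(best, bernstein_bound(t, v, b))
```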
Failure Mode
The bounded-deviation assumption is doing real work. If the variables are unbounded but merely have finite variance, Bernstein does not apply. Use Chebyshev, a sub-exponential condition, or a truncation argument instead.
Bernstein Bound for a Sample Mean
Statement
If $X_1, \dots, X_n$ are i.i.d., $\mathbb{E}[X_1] = \mu$, $|X_1 - \mu| \le b$ almost surely, and $\sigma^2 = \operatorname{Var}(X_1)$, then for every $\varepsilon > 0$:

$$\mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^n X_i - \mu\right| \ge \varepsilon\right) \le 2\exp\left(-\frac{n\varepsilon^2}{2\left(\sigma^2 + b\varepsilon/3\right)}\right).$$
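To see the sample-mean bound in action, one can simulate a low-variance bounded variable and compare the observed deviation frequency with the bound. A minimal sketch (the Bernoulli setup and all parameter values are assumed for illustration):

```python
import math
import random

def bernstein_mean_bound(n, eps, var, b=1.0):
    """Two-sided Bernstein bound on P(|sample mean - mu| >= eps)."""
    return 2 * math.exp(-n * eps**2 / (2 * (var + b * eps / 3)))

random.seed(0)
n, p, eps, trials = 200, 0.05, 0.05, 2000
var = p * (1 - p)  # Bernoulli(p) variance, well below the worst case 1/4
hits = 0
for _ in range(trials):
    mean = sum(random.random() < p for _ in range(n)) / n
    if abs(mean - p) >= eps:
        hits += 1
# The empirical deviation frequency should stay below the bound.
print(hits / trials, bernstein_mean_bound(n, eps, var))
```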
Why It Matters
This is the form used in statistics and learning theory. It says the sample mean concentrates at a rate controlled by the true variance $\sigma^2$ for small deviations $\varepsilon$, while still respecting the bounded range $b$ for larger deviations.
Bernstein vs. Hoeffding
For $X_i \in [0, 1]$, Hoeffding gives:

$$\mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^n X_i - \mu\right| \ge \varepsilon\right) \le 2\exp\left(-2n\varepsilon^2\right).$$

Bernstein gives:

$$\mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^n X_i - \mu\right| \ge \varepsilon\right) \le 2\exp\left(-\frac{n\varepsilon^2}{2\left(\sigma^2 + \varepsilon/3\right)}\right).$$

If $\sigma^2$ is near the worst-case value $1/4$, the two bounds are comparable up to constants. If $\sigma^2 \ll 1/4$, Bernstein can be much tighter.
Suppose $X_i \in [0, 1]$, $\sigma^2 = 0.01$, and $\varepsilon = 0.01$. To bound $\mathbb{P}\big(|\frac{1}{n}\sum_i X_i - \mu| \ge \varepsilon\big)$, Hoeffding uses only the range and behaves like $\exp(-2n\varepsilon^2) = \exp(-0.0002\,n)$. Bernstein uses the variance and behaves like $\exp\!\big(-n\varepsilon^2/(2(\sigma^2 + \varepsilon/3))\big) \approx \exp(-0.00375\,n)$ under the common two-sided sample-mean form. The exponent is about 19 times larger, so the required sample size is far smaller.
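The exponent comparison can be reproduced in a few lines, assuming $\sigma^2 = 0.01$ and $\varepsilon = 0.01$ with range $[0, 1]$:

```python
# Per-sample exponential rates of the two bounds for X_i in [0, 1].
sigma2, eps = 0.01, 0.01
hoeffding_exponent = 2 * eps**2                        # range-only rate
bernstein_exponent = eps**2 / (2 * (sigma2 + eps / 3)) # variance-sensitive rate
print(bernstein_exponent / hoeffding_exponent)         # ≈ 18.75
```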
Numeric Comparison: Sample Sizes Across Variance Regimes
How much does Bernstein actually buy you over Hoeffding? Suppose $X_i \in [0, 1]$ i.i.d.; for concreteness take target deviation $\varepsilon = 0.05$ and confidence $\delta = 0.05$. The minimum sample size to certify $\mathbb{P}\big(|\frac{1}{n}\sum_i X_i - \mu| \ge \varepsilon\big) \le \delta$ via each bound:

| $\sigma^2$ | Hoeffding ($b = 1$) | Bernstein (variance-sensitive) | Ratio |
|---|---|---|---|
| $0.25$ (Bernoulli at $p = 1/2$) | $738$ | $787$ | $\approx 1\times$ |
| $0.01$ (sparse loss, mean $\approx 0.01$) | $738$ | $79$ | $\approx 9\times$ |
| $0.001$ (very sparse, mean $\approx 0.001$) | $738$ | $53$ | $\approx 14\times$ |
Three observations:
- Hoeffding ignores variance. Its column is constant across rows because Hoeffding only uses the range $b = 1$.
- Bernstein scales with $\sigma^2 + \varepsilon/3$. For sparse losses (small mean, small variance), Bernstein needs a fraction of the samples Hoeffding does.
- The break-even is around $\sigma^2 = 1/4$. For Bernoulli at $p = 1/2$, the variance equals the worst-case range variance, so Bernstein's edge collapses. This is the regime where Hoeffding is essentially optimal.
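The sample sizes in the comparison can be recomputed by inverting each two-sided bound for $n$; a sketch under the assumed $\varepsilon = 0.05$, $\delta = 0.05$:

```python
import math

def n_hoeffding(eps, delta, b=1.0):
    """Smallest n with 2*exp(-2*n*eps^2 / b^2) <= delta (range-b Hoeffding)."""
    return math.ceil(b**2 * math.log(2 / delta) / (2 * eps**2))

def n_bernstein(eps, delta, var, b=1.0):
    """Smallest n with 2*exp(-n*eps^2 / (2*(var + b*eps/3))) <= delta."""
    return math.ceil(2 * (var + b * eps / 3) * math.log(2 / delta) / eps**2)

eps, delta = 0.05, 0.05  # assumed target deviation and confidence
for var in (0.25, 0.01, 0.001):
    print(var, n_hoeffding(eps, delta), n_bernstein(eps, delta, var))
```

Inverting the bound symbolically rather than tabulating it makes it easy to explore other $(\varepsilon, \delta)$ regimes.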
The takeaway: Bernstein is the right default when losses are bounded and usually small. In learning theory this is standard for cross-entropy under low-noise data, in PAC-Bayes for stochastic predictors with low empirical risk, and in variance-reduced Monte Carlo estimators.
The Empirical Bernstein Inequality
The classical Bernstein bound requires knowing the true variance $\sigma^2$. In practice you only have the empirical variance $\hat{\sigma}_n^2$. The empirical Bernstein inequality (Maurer-Pontil 2009) lets you plug in $\hat{\sigma}_n^2$ at the cost of a small extra term.
Empirical Bernstein Inequality (Maurer-Pontil 2009)
Statement
For i.i.d. $X_1, \dots, X_n \in [0, 1]$ with mean $\mu$, with probability at least $1 - \delta$:

$$\mu - \frac{1}{n}\sum_{i=1}^n X_i \le \sqrt{\frac{2\hat{\sigma}_n^2 \ln(2/\delta)}{n}} + \frac{7\ln(2/\delta)}{3(n-1)},$$

where $\hat{\sigma}_n^2$ is the unbiased sample variance. A two-sided version follows by a union bound over both directions.
The first term is Bernstein's variance-sensitive contribution with $\sigma^2$ replaced by the empirical variance $\hat{\sigma}_n^2$. The second term, of order $\ln(2/\delta)/n$, is the unavoidable price of estimating the variance from data.
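A sketch of the resulting confidence radius, following the Maurer-Pontil form above (the helper name is illustrative):

```python
import math

def empirical_bernstein_radius(xs, delta):
    """One-sided Maurer-Pontil confidence radius for the mean of i.i.d.
    samples in [0, 1], valid with probability at least 1 - delta."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)  # unbiased sample variance
    return math.sqrt(2 * var * math.log(2 / delta) / n) \
        + 7 * math.log(2 / delta) / (3 * (n - 1))

# With zero empirical variance only the estimation-price term remains.
print(empirical_bernstein_radius([0.5] * 10, delta=0.05))
```

Note that everything on the right-hand side is computable from the sample, which is exactly what makes this form usable for data-dependent confidence intervals.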
Intuition
Two random objects ($\bar{X}_n$ and $\hat{\sigma}_n^2$) need to concentrate simultaneously. The tail bound for $\hat{\sigma}_n^2$ contributes the $\ln(2/\delta)/(n-1)$ correction. For large $n$, the $\sqrt{\hat{\sigma}_n^2 \ln(2/\delta)/n}$ term dominates and the bound matches the population-variance Bernstein bound up to constants.
Why It Matters
This is what people use in practice: confidence intervals from a single sample without needing prior knowledge of $\sigma^2$. It powers data-dependent generalization bounds (variance-sensitive PAC-Bayes), Monte Carlo confidence intervals, and adaptive bandit algorithms (UCB-V).
Failure Mode
The constant $7/3$ (from Maurer-Pontil) is not sharp. Refinements (Audibert-Munos-Szepesvári 2009) tighten it. Also: the bound is data-dependent only in the variance term; the range (here normalized to $[0, 1]$) still appears explicitly. For unbounded data with finite variance, see self-normalized inequalities (de la Peña, Lai, Shao 2009).
Common Confusions
Bernstein is not always smaller than Hoeffding
Bernstein is variance-sensitive, not magic. Constants matter. At small sample sizes or when the variance is not much smaller than the worst-case range variance, Hoeffding can be just as useful.
The variance must be controlled
The bound needs a variance proxy. If you estimate variance from the same sample, that estimate needs its own concentration argument. Plugging in an empirical variance without accounting for its error is a different theorem.
Bounded range and sub-exponential are related but not identical
The scalar Bernstein inequality above assumes bounded centered deviations. The sub-exponential Bernstein bound replaces boundedness with an MGF condition that holds near zero. The two statements have similar two-regime shapes, but their assumptions are not the same.
Exercises
Problem
Let $X_1, \dots, X_n$ be i.i.d. with $X_i \in [0, 1]$, $\mathbb{E}[X_1] = \mu$, and $\operatorname{Var}(X_1) = \sigma^2$. Use the sample mean Bernstein bound to upper-bound $\mathbb{P}\big(\big|\frac{1}{n}\sum_{i=1}^n X_i - \mu\big| \ge \varepsilon\big)$.
Problem
Explain why Bernstein is often the right bound for finite-class generalization when the loss is bounded and usually near zero.
References
Canonical:
- Bernstein, S. N. (1924). On a modification of Chebyshev's inequality and on the error in Laplace's formula. Collected Works (in Russian), IV (1964), 71-79. The original derivation of variance-sensitive concentration inequalities.
- Boucheron, S., Lugosi, G., & Massart, P. (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press. Chapters 2-3 give the modern Chernoff-method derivation and the comparison with Bennett's inequality.
- Massart, P. (2007). Concentration Inequalities and Model Selection. Springer (Saint-Flour Lecture Notes 33). Chapter 2 covers Bernstein in the context of empirical-process bounds.
Current:
- Vershynin, R. (2018). High-Dimensional Probability. Cambridge University Press. Chapter 2 develops Bernstein for sub-exponential variables with sharp constants.
- Wainwright, M. J. (2019). High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press. Chapter 2 connects Bernstein to MGF-based concentration and to matrix concentration.
- Mohri, M., Rostamizadeh, A., & Talwalkar, A. (2018). Foundations of Machine Learning (2nd ed.). MIT Press. Chapters 2-3 use Bernstein for variance-sensitive PAC bounds.
Empirical Bernstein:
- Maurer, A., & Pontil, M. (2009). Empirical Bernstein bounds and sample-variance penalization. COLT 2009. The empirical-Bernstein theorem: replace $\sigma^2$ with $\hat{\sigma}_n^2$ at the cost of an $O(\ln(2/\delta)/n)$ correction.
- Audibert, J.-Y., Munos, R., & Szepesvári, C. (2009). Exploration-exploitation tradeoff using variance estimates in multi-armed bandits. Theoretical Computer Science, 410(19), 1876-1902. Tighter constants in the empirical-Bernstein bound; introduces UCB-V.
Next Topics
- Sub-exponential random variables: the distributional class behind Bernstein-type tails
- Matrix concentration: Bernstein bounds for random matrices
- Uniform convergence: where Hoeffding and Bernstein feed learning bounds
Last reviewed: May 4, 2026
Required prerequisites
- Expectation, Variance, Covariance, and Moments (layer 0A · tier 1)
- Concentration Inequalities (layer 1 · tier 1)
- Hoeffding's Lemma (layer 1 · tier 1)
- Bennett's Inequality (layer 2 · tier 1)
- Moment Generating Functions (layer 0A · tier 2)
Derived topics
- Sub-Exponential Random Variables (layer 2 · tier 1)
- Uniform Convergence (layer 2 · tier 1)
- Matrix Concentration (layer 3 · tier 1)