Hoeffding’s inequality is a crucial result in probability theory, as it provides an *upper bound* on the probability that the sum of a sample of independent random variables deviates from its expected value.

Though the bound holds, with slight modifications, for general independent random variables (not necessarily identically distributed), here we restrict ourselves to the special case of a sample of $n$ independent and identically distributed (i.i.d.) Bernoulli random variables $X_1, X_2, \ldots, X_n$, that is, a Bernoulli process. By definition, the outcome of each variable $X_i$ is either $0$ (failure) or $1$ (success), namely $X_i \in \{0, 1\}$. Moreover, the probability of success (resp., failure) is described by the same parameter $p$ (resp., $1 - p$) for all the variables $X_i$. More formally, we state that $X_i \sim \mathrm{Bern}(p)$, where $\mathrm{Bern}$ denotes the Bernoulli distribution and $p$ is the parameter governing the distribution.

Generally speaking, if we want to provide a (single-point) estimate of the true population parameter $p$ using the sample of $n$ observed Bernoulli random variables, we just need to measure the frequency of successes out of the total number of trials, that is, how many variables out of $n$ did output $1$:

$$\hat{p} = \frac{1}{n}\sum_{i=1}^{n} X_i$$

This is often referred to as the Maximum Likelihood Estimate (MLE), as it is the estimate which maximizes the probability of seeing the observed data.
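As a quick illustration, here is a minimal sketch in Python (the values of $p$ and $n$ are arbitrary choices, not taken from the text) that simulates such a Bernoulli process and computes the MLE estimate as the fraction of successes:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

p = 0.3   # true success probability (unknown in practice) -- arbitrary choice
n = 1000  # number of i.i.d. Bernoulli trials -- arbitrary choice

# Simulate the Bernoulli process: each X_i is 1 with probability p, 0 otherwise
x = rng.binomial(n=1, p=p, size=n)

# MLE estimate: frequency of successes, i.e. the sample mean
p_hat = x.mean()
print(f"true p = {p}, estimated p_hat = {p_hat:.3f}")
```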

Note also that $\hat{p}$ and $p$ can be considered as the sample mean and the population mean, respectively.

Hence, for any $\varepsilon > 0$, the (one-sided) inequality proposed by Hoeffding states that:

$$\Pr(\hat{p} - p \geq \varepsilon) \leq e^{-2n\varepsilon^2}$$

By symmetry, the above inequality is also valid for the other side of the difference:

$$\Pr(p - \hat{p} \geq \varepsilon) \leq e^{-2n\varepsilon^2}$$

Finally, adding up the two one-sided inequalities we obtain the two-sided inequality:

$$\Pr(|\hat{p} - p| \geq \varepsilon) \leq 2e^{-2n\varepsilon^2}$$

More generally, Hoeffding’s inequality states that the probability of the estimation error $|\hat{p} - p|$ being greater than a positive constant $\varepsilon$ is bounded by the quantity $2e^{-2n\varepsilon^2}$.
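To see the bound in action, here is a small Monte Carlo sketch (again Python; $p$, $n$, $\varepsilon$ and the number of repetitions are arbitrary choices for illustration) that compares the empirical frequency of a large estimation error against the Hoeffding bound:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

p, n, eps = 0.3, 500, 0.05   # arbitrary illustrative values
trials = 20_000              # number of repeated experiments

# For each experiment, draw n Bernoulli(p) samples and compute p_hat
p_hats = rng.binomial(n=1, p=p, size=(trials, n)).mean(axis=1)

empirical = np.mean(np.abs(p_hats - p) >= eps)  # estimated P(|p_hat - p| >= eps)
hoeffding = 2 * np.exp(-2 * n * eps**2)         # two-sided Hoeffding bound

print(f"empirical probability: {empirical:.4f}")
print(f"Hoeffding bound:       {hoeffding:.4f}")  # always >= the true probability
```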

Intuitively, we want the estimate $\hat{p}$ to be close to the true $p$. Therefore, for a fixed and very small $\varepsilon$, we want the above probability to be small as well, namely we would like $2e^{-2n\varepsilon^2}$ to be close to $0$!

If we look carefully at the inequality, we notice that the bound doesn’t depend on $p$ at all, which is nice. Also notice that it decays exponentially with $n$, which is also very good: as one might expect, the more observations we collect, the smaller the probability of a bad estimation error.

However, the exponent also includes the squared term $\varepsilon^2$, which goes rapidly to $0$ as $\varepsilon$ gets smaller, thereby pushing the whole exponent $-2n\varepsilon^2$ towards $0$. That means that the “price” we have to pay, in terms of number of observations needed, could be very high if we want a very restrictive bound on the error.
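The following sketch (Python; the grid of values is an arbitrary choice) makes this trade-off concrete by tabulating the bound $2e^{-2n\varepsilon^2}$ for a few combinations of $n$ and $\varepsilon$; note how, for small $\varepsilon$ and moderate $n$, the bound exceeds $1$ and becomes vacuous:

```python
import numpy as np

def hoeffding_bound(n, eps):
    """Two-sided Hoeffding bound on P(|p_hat - p| >= eps)."""
    return 2 * np.exp(-2 * n * eps**2)

for eps in (0.1, 0.05, 0.01):
    for n in (100, 1_000, 10_000):
        print(f"eps = {eps:4.2f}, n = {n:6d}  ->  bound = {hoeffding_bound(n, eps):.6f}")
```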

In the unlucky edge case where $\varepsilon = 0$, the bound is in fact practically useless, since it would simply tell us that:

$$\Pr(|\hat{p} - p| \geq 0) \leq 2$$

which is something that we already know from the basic axioms of probability! (In fact, those axioms provide an even stricter bound, since the probability of any event always lies between $0$ and $1$.)

The probability described by Hoeffding’s inequality can be interpreted as the significance level $\alpha$ (i.e. the upper bound on the probability of making an error) for a confidence interval around $\hat{p}$ of size $2\varepsilon$; we therefore require

$$2e^{-2n\varepsilon^2} \leq \alpha$$

Solving the inequality above for $n$ gives us the number of observations (or the number of times the Bernoulli trial should be repeated) needed in order for $|\hat{p} - p| \leq \varepsilon$ to hold with confidence level $1 - \alpha$.

Applying the natural logarithm to both sides of the inequality above results in:

$$n \geq \frac{\ln(2/\alpha)}{2\varepsilon^2}$$

Therefore, this gives us a lower bound on the number of observations we would need in order to be at least $(1-\alpha)\cdot 100\%$ confident that our estimate $\hat{p}$ differs *at most* by $\varepsilon$ from the true value $p$.
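A small helper sketched in Python (the function name `min_sample_size` is just an assumption for illustration) that computes this lower bound:

```python
import math

def min_sample_size(eps, alpha):
    """Smallest n such that 2 * exp(-2 * n * eps**2) <= alpha,
    i.e. n >= ln(2 / alpha) / (2 * eps**2)."""
    return math.ceil(math.log(2 / alpha) / (2 * eps**2))
```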

As an example, consider how large the sample size $n$ would have to be at least if we want to be (at least) $95\%$ confident (i.e. $\alpha = 0.05$) that our estimate is correct within an error $\varepsilon = 0.1$. According to the result above, it must hold that

$$n \geq \frac{\ln(2/0.05)}{2 \cdot 0.1^2} \approx 184.4$$

So, with $n = 185$ observations we can be at least $95\%$ confident that our estimation error is bounded by $0.1$. Of course, if we would like a smaller error bound $\varepsilon$ and/or a smaller significance level $\alpha$ (resp., a higher confidence level), $n$ should increase accordingly.
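Using the helper sketched above, this computation can be reproduced directly:

```python
print(min_sample_size(eps=0.1, alpha=0.05))  # -> 185
```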
