Category Archives: Probability Theory

Probability theory is the branch of mathematics concerned with probability, the analysis of random phenomena. As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of large sets of data.

The Representability of the Relative Entropy

Definition (Relative entropy): The relative entropy between two discrete probability distributions \mb P' and \mb P on the probability simplex \mc P is defined to be the nonnegative quantity

    \[   D(\mb P', \mb P) = \sum_{i\in \Xi} \mb P'(i) \log \left( \frac{\mb P'(i)}{\mb P(i)} \right). \]

The relative entropy is finite only when the distribution \mb P' is absolutely continuous with respect to the distribution \mb P. Whenever \mb P'(i) is zero, the contribution of the ith term is taken to be zero, since \lim_{x\to 0^+} x \log x = 0.
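
As a quick numerical companion to the definition, here is a minimal sketch in Python, assuming NumPy and SciPy are available; scipy.special.rel_entr computes the elementwise terms \mb P'(i)\log(\mb P'(i)/\mb P(i)) with exactly the conventions above.

    import numpy as np
    from scipy.special import rel_entr   # elementwise x*log(x/y), with 0*log(0/y) taken as 0

    def relative_entropy(p_prime, p):
        """D(P', P); returns +inf when P' is not absolutely continuous w.r.t. P."""
        p_prime, p = np.asarray(p_prime, dtype=float), np.asarray(p, dtype=float)
        return rel_entr(p_prime, p).sum()

    p_prime = np.array([0.5, 0.5, 0.0])
    p       = np.array([0.4, 0.4, 0.2])
    print(relative_entropy(p_prime, p))   # finite, since P' is absolutely continuous w.r.t. P
    print(relative_entropy(p, p_prime))   # +inf, since P is not absolutely continuous w.r.t. P'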

Properties (Relative entropy): The relative entropy enjoys the following properties:

  1. Information inequality: D(\mb P', \mb P)\geq 0 for all \mb P', \mb P in \mc P, while D(\mb P', \mb P)=0 if and only if \mb P'=\mb P.
  2. Convexity: D(\mb P', \mb P) is convex in (\mb P', \mb P) \in \mc P\times \mc P.
  3. Lower semicontinuity: D(\mb P', \mb P) is lower semicontinuous in (\mb P', \mb P) \in \mc P\times \mc P.
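The information inequality and the joint convexity above can be verified numerically; the following is a small sanity check, again assuming NumPy and SciPy, with randomly generated distributions chosen purely for illustration.

    import numpy as np
    from scipy.special import rel_entr

    rng = np.random.default_rng(0)

    def D(p_prime, p):
        return rel_entr(p_prime, p).sum()

    def random_distribution(n):
        x = rng.random(n)
        return x / x.sum()

    p_prime, p = random_distribution(5), random_distribution(5)
    q_prime, q = random_distribution(5), random_distribution(5)

    # Information inequality: D(P', P) >= 0, with equality if and only if P' = P.
    assert D(p_prime, p) >= 0 and np.isclose(D(p, p), 0.0)

    # Joint convexity: the midpoint inequality must hold.
    midpoint = D(0.5 * (p_prime + q_prime), 0.5 * (p + q))
    assert midpoint <= 0.5 * D(p_prime, p) + 0.5 * D(q_prime, q) + 1e-12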

The relative entropy defines two convex measures of distance on the unit simplex. Indeed, we can define in this context two distinct kinds of pseudo balls due to the asymmetry of the relative entropy. The sublevel set of the first kind

    \[   \mb B^1_r(\mb P) \defn \set{\mb P'\in\mc P}{D(\mb P', \mb P)\leq r}  \]

and the sublevel set of the second kind

    \[ \mb B^2_r(\mb P') \defn \set{\mb P\in\mc P}{D(\mb P', \mb P)\leq r} \]

indeed characterize two distinct convex geometries of distance on the unit simplex. From the convexity of the relative entropy it follows that both kinds of pseudo balls \mb B^1_r(\mb P) and \mb B^2_r(\mb P') are convex for any positive r.

Pseudo balls of the first kind

Pseudo balls \mb B^1_r(\mb P) of the first kind were encountered previously in the context of Sanov’s theorem. The figure below illustrates the pseudo balls of the first kind \mb B^1_r(\mb P) for various r.

[Figure: relative entropy contours on the unit simplex, illustrating pseudo balls of the first kind \mb B^1_r(\mb P).]

Theorem (Pseudo balls of the first kind): Pseudo balls of the first kind \mb B^1_r(\mb P) can be characterized as

    \[   \mb B^1_r(\mb P) = \{\mb P'\in \mc P : H(\mb P') \leq r + \textstyle\sum_{i\in\Xi} \mb P'(i) \log \mb P(i) \} \]

using the negative entropy function H(\mb P') \defn \sum_{i\in \Xi} \mb P'(i)\log \mb P'(i).

The negative entropy function H(\mb P') is a convex function which can be canonically represented using the exponential cone. We will see that the pseudo balls \mb B^2_r(\mb P') of the second kind admit a representation in terms of the geometric mean instead.
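
As an illustration of this representability, the following is a minimal modelling sketch, assuming the cvxpy package together with an exponential-cone-capable solver (such as SCS or ECOS). The kl_div atom is the elementwise function x\log(x/y) - x + y, which cvxpy compiles to exponential-cone constraints; summed over probability vectors the extra terms cancel, so the constraint below is exactly D(\mb P', \mb P) \leq r. The sketch projects a hypothetical point q onto a pseudo ball of the first kind.

    import cvxpy as cp
    import numpy as np

    p = np.array([0.2, 0.3, 0.5])      # centre P of the pseudo ball of the first kind
    q = np.array([0.6, 0.3, 0.1])      # hypothetical point to project onto B^1_r(P)
    r = 0.1

    p_prime = cp.Variable(3, nonneg=True)
    constraints = [cp.sum(p_prime) == 1,
                   cp.sum(cp.kl_div(p_prime, p)) <= r]          # D(P', P) <= r
    problem = cp.Problem(cp.Minimize(cp.sum_squares(p_prime - q)), constraints)
    problem.solve()
    print(p_prime.value)               # Euclidean projection of q onto B^1_r(P)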

Pseudo balls of the second kind

The figure below illustrates the pseudo balls of the second kind \mb B^2_r(\mb P') for various r.

[Figure: relative entropy pseudo balls of the second kind \mb B^2_r(\mb P') on the unit simplex.]

In practice, pseudo balls of the second kind \mb B^2_r(\mb P') are mostly encountered around empirical distributions

    \[   \mb P'(i)=\textstyle\frac1T \sum_{k=1}^T \mb 1_{\xi_k=i} \]

of T data samples (\xi_1, \dots, \xi_T). In this case, the elements of the probability vector \mb P' are fractions with T as a common denominator. The set of all such distributions \mb P' is denoted further as \mc P_T.
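
For concreteness, such an empirical distribution can be formed with a few lines of NumPy (the sample values below are hypothetical):

    import numpy as np

    xi = np.array([0, 2, 1, 2, 2, 0, 1, 2])     # T = 8 samples taking values in Xi = {0, 1, 2}
    T, m = xi.size, 3

    p_emp = np.bincount(xi, minlength=m) / T    # P'(i) = (1/T) * sum_k 1{xi_k = i}
    print(p_emp)                                # entries are fractions with common denominator T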

Theorem (Pseudo balls of the second kind): Pseudo balls of the second kind \mb B^2_r(\mb P') around distributions \mb P' \in \mc P_T can be characterized as

    \[   \mb B^2_r(\mb P') = \{\mb P\in \mc P : (\textstyle\prod_{i\in\Xi} \mb P(i)^{T\cdot\mb P'(i)})^{1/T} \geq e^{-(r - H(\mb P'))} \}. \]

Note that as \mc P_T becomes dense in \mc P with increasing T, the previous theorem can be used to construct a second-order cone representation (of arbitrary precision) of the pseudo balls \mb B^2_r(\mb P') for any \mb P' in \mc P. Indeed, the function (\prod_{i\in\Xi} \mb P(i)^{T\cdot\mb P'(i)})^{1/T} is recognized as a geometric mean, which is a nonnegative concave function that is canonically represented using the second-order cone.
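
This representation is also readily available in modelling software. The sketch below assumes cvxpy, whose geo_mean atom with integer weights T\cdot\mb P'(i) equals (\prod_{i\in\Xi} \mb P(i)^{T\cdot\mb P'(i)})^{1/T} and is compiled into second-order cone constraints. The data are hypothetical, and the optimization simply finds the distribution in the pseudo ball with the largest first entry.

    import cvxpy as cp
    import numpy as np

    T = 10
    counts = np.array([2, 3, 5])                  # T * P'(i) for an empirical distribution in P_T
    p_emp = counts / T
    H = float(np.sum(p_emp * np.log(p_emp)))      # H(P') as defined above
    r = 0.2

    p_var = cp.Variable(3, nonneg=True)
    constraints = [cp.sum(p_var) == 1,
                   cp.geo_mean(p_var, counts.tolist()) >= np.exp(-(r - H))]   # P in B^2_r(P')
    problem = cp.Problem(cp.Maximize(p_var[0]), constraints)
    problem.solve()
    print(p_var.value)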

Csiszár and Sanov

It turns out that the Chernoff bound previously discussed has a neat generalization in large deviation theory. In this setting the object under study is the entire empirical distribution \hat{\mb P}\defn \frac{1}{n} \sum_{i=1}^n \delta_{Y_i} instead of merely the empirical average of the data. Now we can study extreme events in which the empirical distribution realizes in a convex set \mc C.

Csiszár’s Theorem

Let Y_i be data samples in \Re^d drawn independently and identically from the distribution \mb P_0 with mean \mu_0. An important class of undesirable events can be expressed as the empirical distribution of the data Y_i realizing in a closed convex set \mc C. The closed convex set \mc C is a subset of the probability simplex \mc P_d of distributions on \Re^d. For the topology employed here and further technical conditions on the set \mc C, the reader is referred to the seminal paper of Csiszár. For the sake of readability, these technical points are suppressed here.

Csiszár’s Theorem: The probability that the empirical distribution of Y_i realizes in a closed convex set \mc C is bounded by

(1)   \begin{equation*}   \log \mb P_0^n \left( \hat{\mb P} \in \mc C \right) \leq -n \cdot \textstyle\inf_{\mb P\in \mc C} \, D(\mb P, \mb P_0) \end{equation*}

where D(\mb P, \mb P_0) is the Kullback-Leibler divergence between \mb P and \mb P_0.

Geometric Interpretation

Csiszár’s theorem can be seen to subsume Chernoff’s result by recognizing that the empirical average \frac1n \sum_{i=1}^n Y_i realizing in a closed convex set C \subset \Re^d is stated equivalently as the empirical distribution \hat{\mb P} realizing in the closed convex set \set{\mb P}{\int y \, \mb P(\d y) \in C}. In fact, both bounds are exactly the same, as it can be shown that

    \[   \inf_{\mu\in C} \Lambda^\star(\mu-\mu_0) = \inf_{\mb P} \{D(\mb P, \mb P_0) : \textstyle\int y \, \mb P(\d y) \in C\}, \]

where \Lambda^\star is the convex dual of the log moment generating function \Lambda(\lambda) \defn \log \mb E_{\mb P}[e^{\lambda\tpose Y}] of the distribution \mb P of Y_i - \mu_0. Observe that Csiszár’s inequality (1) admits a nice geometric interpretation, as illustrated in the figure below. The probability of the empirical distribution realizing in a set \mc C is bounded above in terms of the distance, as measured by the Kullback-Leibler divergence, between the distribution \mb P_0 generating the data and the set of extreme events \mc C.

As the Kullback-Leibler divergence is jointly convex in both its arguments, Csiszár’s bound (1) is stated in terms of a convex optimization problem; that is, a convex optimization problem over distributions on \Re^d rather than over vectors in \Re^d.

[Figure: geometric interpretation of Csiszár’s bound (1).]
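
The minimization on the right-hand side of (1) is a convex optimization problem. As a finite-alphabet sketch (a hypothetical simplification of the setting above, assuming cvxpy with an exponential-cone-capable solver), the exponent \inf_{\mb P\in\mc C} D(\mb P, \mb P_0) can be computed when \mc C collects the distributions on a four-point alphabet whose mean is at least some threshold t. Note that cvxpy’s kl_div atom adds the terms -x + y, which cancel on the probability simplex.

    import cvxpy as cp
    import numpy as np

    support = np.array([0.0, 1.0, 2.0, 3.0])
    p0 = np.array([0.4, 0.3, 0.2, 0.1])                # data-generating distribution P_0
    t = 2.0

    p = cp.Variable(4, nonneg=True)
    objective = cp.Minimize(cp.sum(cp.kl_div(p, p0)))  # equals D(P, P_0) on the simplex
    constraints = [cp.sum(p) == 1, support @ p >= t]   # P in C: distributions with mean at least t
    rate = cp.Problem(objective, constraints).solve()
    print(rate)      # Csiszar-type bound: log P_0^n(P_hat in C) <= -n * rate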

Sanov’s Theorem

Csiszár’s bound (1) is furthermore exponentially tight, as it correctly identifies the exponential rate \inf_{\mb P} \set{D(\mb P, \mb P_0)}{\mb P \in \mc C} with which the probability of the event \hat{\mb P}\in \mc C diminishes to zero. This fact is codified in Sanov’s theorem.

Sanov’s Theorem: The probability that the empirical distribution of Y_i realizes in an open convex set \mc C diminishes with exponential rate

(2)   \begin{equation*}   \frac{1}{n} \log \mb P_0^n \left( \hat{\mb P} \in \mc C \right) \to - \textstyle\inf_{\mb P \in \mc C} \, D(\mb P, \mb P_0). \end{equation*}

The preceding shows that the Chernoff and Csiszár inequalities accurately quantify the probability of extreme events taking place. Furthermore, due to the convexity of the objects involved, computing these bounds amounts to solving convex optimization problems.
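
The exponential rate in (2) can be observed numerically. Below is a rough illustration for Bernoulli data (a hypothetical example, assuming NumPy and SciPy): the event is that the empirical frequency of ones is at least q, and the corresponding rate is the relative entropy between the Bernoulli distributions with parameters q and p_0.

    import numpy as np
    from scipy.stats import binom

    p0, q = 0.3, 0.5
    rate = q * np.log(q / p0) + (1 - q) * np.log((1 - q) / (1 - p0))   # D(Ber(q), Ber(p0))

    for n in (10, 100, 1000, 10000):
        # P(empirical frequency >= q) = P(Binomial(n, p0) >= n*q)
        log_prob = binom.logsf(int(np.ceil(n * q)) - 1, n, p0)
        print(n, log_prob / n, -rate)   # (1/n) log-probability approaches -rate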

References

  1. Csiszár, I. “Sanov property, generalized I-projection and a conditional limit theorem.” The Annals of Probability (1984): 768-793.

Chernoff and Cramér

When working with statistical data, it is often desirable to be able to quantify the probability of certain undesirable events taking place. In this post we discuss an interesting connection between convex optimization and extreme event analysis. We start with the classical Chernoff bound for the empirical average.

Chernoff’s Bound

Let Y_i be data samples in \Re^d drawn independently and identically from the distribution \mb P_0 with mean \mu_0. An important class of undesirable events can be expressed as the empirical average of the data Y_i realizing in a convex set C. When, for instance, the Y_i have an interpretation as losses, knowing the probability of the average loss exceeding a critical value t is paramount. In that case, knowing the probability that the empirical average \hat \mu \defn \frac1n \sum_{i=1}^n Y_i realizes in the half space \set{y}{y \geq t} would be of great interest. Chernoff’s classical inequality quantifies the probability of such events quite nicely.

Chernoff’s Theorem: The probability that the empirical average \hat \mu of Y_i realizes in a closed convex set C satisfies

(1)   \begin{equation*}   \log \mb P_0^n \left(\hat \mu \in C \right) \leq -n \cdot \textstyle\inf_{\mu \in C}\Lambda^\star(\mu-\mu_0) \end{equation*}

with \Lambda^\star(\mu) \defn \sup_{\lambda\in\Re^d} \, \{\lambda\tpose \mu - \Lambda(\lambda)\} the convex dual of the log moment generating function \Lambda(\lambda)\defn \log \mb E_{\mb P}[e^{\lambda\tpose Y}] and \mb P the distribution of Y_i-\mu_0.

Proof: Let \lambda\in\Re^d and t\in\Re, and consider the positive function f(\mu)\defn e^{n\lambda\tpose \mu + t}. If the function f satisfies f(\mu)\geq 1 for all \mu in C, then we may conclude that

    \[   \mb P_0^n(\hat \mu \in C) \leq  \E{\mb P_0^n}{f(\textstyle\frac1n \sum_{i=1}^n Y_i)}. \]

Using the independence of distinct samples Y_i and taking the logarithm on both sides of the previous inequality establishes

    \begin{align*}    \log\mb P_0^n(\hat \mu \in C) \leq & \, t + n \cdot \log \E{\mb P_0}{e^{\lambda\tpose Y}},\\    = &  \, t + n \cdot \lambda\tpose \mu_0 + n \cdot \Lambda(\lambda). \end{align*}

It is clear that f(\mu)\geq 1 for all \mu in C if and only if -n \lambda\tpose \mu\leq t for all \mu in C. Choosing the smallest such t, namely t = -n \inf_{\mu \in C} \lambda\tpose \mu, and subsequently optimizing over \lambda, we obtain the general form of Chernoff’s bound

    \begin{align*}   \log\mb P_0^n(\hat \mu \in C) \leq & - n \cdot \sup_{\lambda \in \Re^d} \left\{ \inf_{\mu \in C} \, \lambda\tpose (\mu-\mu_0) - \Lambda(\lambda) \right\}, \\   \leq & -n \cdot \textstyle \inf_{\mu \in C} \Lambda^\star(\mu-\mu_0). \end{align*}

The last inequality follows from the minimax theorem for convex optimization.
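
To make the bound concrete, the following is a minimal numerical sketch, assuming NumPy and SciPy, with a standard normal example chosen for illustration so that \Lambda(\lambda) = \frac12 \lambda\tpose\lambda as in the table below. The conjugate \Lambda^\star is evaluated by solving the inner concave maximization numerically, and the exponent \inf_{\mu\in C}\Lambda^\star(\mu - \mu_0) is approximated on a grid for C = [t, \infty).

    import numpy as np
    from scipy.optimize import minimize_scalar

    Lambda = lambda lam: 0.5 * lam ** 2               # log moment generating function (standard normal)

    def Lambda_star(mu):
        # Lambda*(mu) = sup_lambda { lambda * mu - Lambda(lambda) }
        result = minimize_scalar(lambda lam: -(lam * mu - Lambda(lam)))
        return -result.fun

    mu0, t, n = 0.0, 1.5, 100
    exponent = min(Lambda_star(mu - mu0) for mu in np.linspace(t, t + 10.0, 2001))
    print(-n * exponent)    # Chernoff bound on log P_0^n(hat mu in [t, oo)); here equal to -n * t^2 / 2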

Geometric Interpretation

The Chernoff bound (1) expresses the probability of the extreme event \hat \mu \in C in terms of the convex conjugate of the log moment generating function. Consequently, Chernoff’s bound can be computed by solving a convex optimization problem.

The function \Lambda(\lambda) = \log \mb E_{\mb P}[e^{\lambda\tpose Y}] is the log moment generating function of the recentered data distribution \mb P and is always convex. The table below gives this cumulant generating function for some common standardized (zero mean, unit variance) distributions.

    \mb P     |  \Lambda(\lambda)                                    |  \dom \Lambda
    Normal    |  \frac{1}{2}\lambda\tpose \lambda                    |  \Re^d
    Laplace   |  \log \left(\frac{2}{2-\lambda\tpose\lambda}\right)  |  \set{\lambda}{\lambda\tpose\lambda < 2}
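
The table entries are easy to check by simulation. Below is a rough Monte Carlo verification of the Laplace row, assuming NumPy; the standardized Laplace distribution has scale 1/\sqrt{2} so that its variance equals one, and \lambda is kept small enough for the estimator to have finite variance.

    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.laplace(loc=0.0, scale=1.0 / np.sqrt(2.0), size=1_000_000)   # standardized Laplace samples

    lam = 0.5                                         # within dom Lambda, i.e. lam^2 < 2
    empirical = np.log(np.mean(np.exp(lam * y)))      # Monte Carlo estimate of Lambda(lam)
    closed_form = np.log(2.0 / (2.0 - lam ** 2))      # table entry
    print(empirical, closed_form)                     # should agree up to Monte Carlo error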

Furthermore, the Chernoff bound comes with a nice geometric interpretation. The function \Lambda^\star(\mu - \mu_0) is nonnegative and convex and thus defines a distance between the mean \mu_0 of the distribution generating the data and a candidate value \mu of the empirical mean.

[Figure: geometric interpretation of Chernoff’s bound (1).]

The minimum distance r between the mean \mu_0 of the distribution generating the data and the set C, as measured by the convex dual \Lambda^\star of the log moment generating function, bounds the probability of the event \hat \mu \in C taking place.

Cramér’s Theorem

Chernoff’s bound is furthermore exponentially tight as it correctly identifies the exact exponential rate with which the probability of the event \hat{\mu}\in C diminishes to zero. This surprising fact is codified in Cramér’s theorem.

Cramér’s Theorem: Assume that the distribution \mb P_0 is such that 0 lies in the interior of \dom \Lambda. Then the probability that the empirical average of Y_i realizes in an open set C satisfies

(2)   \begin{equation*}   \liminf_{n\to\infty}\frac{1}{n} \log \mb P_0^n \left( \hat{\mu} \in C \right) \geq - \textstyle\inf_{\mu \in C} \, \Lambda^\star(\mu- \mu_0). \end{equation*}

The lower bound in Cramér’s theorem shows that the Chernoff inequality accurately quantifies the probability of extreme events taking place as the number of samples n tends to infinity. Notice that Cramér’s theorem does not require the set C to be convex. Note also that 0 \in \dom \Lambda holds trivially, as \Lambda(0) = 0 for any probability distribution; the stronger condition that 0 lies in the interior of \dom \Lambda requires the distribution \mb P to be light-tailed, that is, the tails of \mb P must diminish at an exponential rate.
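
The convergence toward the rate in (2) can again be observed numerically. Below is a quick illustration (assuming NumPy and SciPy, with a hypothetical standard normal example): for C = (t, \infty) the rate is \Lambda^\star(t) = t^2/2, and the exact tail probability of \hat\mu \sim \mc N(0, 1/n) is available in closed form.

    import numpy as np
    from scipy.stats import norm

    t = 1.5
    for n in (10, 100, 1000, 10000):
        # hat mu is N(0, 1/n) for standard normal data, so P(hat mu > t) = P(Z > t * sqrt(n))
        log_prob = norm.logsf(t * np.sqrt(n))
        print(n, log_prob / n, -t ** 2 / 2)   # (1/n) log-probability approaches -t^2/2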

References

  1. A. Dembo, and O. Zeitouni. “Large Deviations Techniques and Applications”, Springer (2010).
  2. S. Boyd, and L. Vandenberghe. “Convex Optimization”, Cambridge University Press (2004).