Stochastic Processes and Financial Mathematics
(part one)
6.2 The monotone convergence theorem
A natural question to ask is: when does \(\E [X_n]\to \E [X]\)? This is not a mode of convergence, but simply a practical question.
We are interested (for use later on in the course) in knowing when almost sure convergence implies that \(\E [X_n]\to \E [X]\). As we can see from Example 6.1.1, in general it does not. We need some extra conditions:
Theorem 6.2.1 (Monotone Convergence Theorem) Let \((X_n)\) be a sequence of random variables and suppose that:
1. \(X_{n+1}\geq X_n\), almost surely, for all \(n\).
2. \(X_n\geq 0\), almost surely, for all \(n\).
Then, there exists a random variable \(X\) such that \(X_n\stackrel {a.s.}{\to }X\). Further, \(\E [X_n]\to \E [X]\).
The first claim of the theorem, that the sequence \(X_n\) converges almost surely, is true because, by property 1, the sequence \((X_n)\) is increasing, almost surely, and increasing sequences of real numbers converge.1
Since limits preserve weak inequalities and \(X_n\geq 0\) we have \(X\geq 0\), almost surely. However, we should not forget that \(\E [X]\) might be equal to \(+\infty \).
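To see the theorem in action, here is a minimal numerical sketch (in Python with NumPy; the specific choice \(Z\sim \text {Exp}(1)\) and \(X_n=\min (Z,n)\) is our own illustration, not part of the course): the sequence \(X_n\) is non-negative and increasing, \(X_n\to Z\) almost surely, and the sample means approach \(\E [Z]=1\).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Our own illustration (not from the notes): Z ~ Exp(1) and
# X_n = min(Z, n).  Then (X_n) is non-negative and increasing in n,
# with X_n -> Z almost surely, so the monotone convergence theorem
# gives E[X_n] -> E[Z] = 1.  (Exactly, E[min(Z, n)] = 1 - e^{-n}.)
z = rng.exponential(scale=1.0, size=1_000_000)

for n in [1, 2, 5, 10]:
    x_n = np.minimum(z, n)
    print(f"n = {n:2d}: sample mean of X_n = {x_n.mean():.4f}")  # -> 1
\end{verbatim}

If \(Z\) instead had infinite mean, the same monotone limit would give \(\E [X_n]\to +\infty \), matching the remark above that \(\E [X]\) may be infinite.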
You can think of Theorem 6.2.1 as a stochastic equivalent of the fact that increasing real-valued sequences converge (in \(\R \) if they are bounded, and to \(+\infty \) if they are not bounded). This is usually helpful when applying the theorem, such as in the following example.
Let \((X_n)\) be a sequence of independent random variables, with distribution given by
\[ X_i= \begin {cases} 2^{-i} & \text { with probability }\frac {1}{2}\\ 0 & \text { with probability }\frac {1}{2}. \end {cases} \]
Let \(Y_n=\sum _{i=1}^n X_i\). Then \((Y_n)\) is an increasing sequence, almost surely, and hence converges almost surely to the limit \(Y=\sum _{i=1}^\infty X_i\).
Since also \(Y_n\geq 0\), we can apply the monotone convergence theorem to \((Y_n)\) and deduce that \(\E [Y_n]\to \E [Y]\). By linearity of \(\E \), and geometric summation, we have that
\[\E [Y_n]=\sum \limits _{i=1}^n \E [X_i]=\sum \limits _{i=1}^n \frac {1}{2}\cdot \frac {1}{2^i}=\frac {1}{2}\cdot \frac {\frac 12-(\frac 12)^{n+1}}{1-\frac 12}=\frac 12 \left (1-2^{-n}\right ).\]
This converges to \(\frac 12\) as \(n\to \infty \), so we deduce that \(\E [Y]=\frac 12\). We’ll investigate this example further in Exercise 6.5. In fact, \(X_i\) corresponds to the \(i^{th}\) digit in the binary expansion of \(Y\).
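As a quick sanity check (again a Python sketch, not part of the notes), we can simulate \(Y_n\) for a moderately large \(n\) and compare the sample mean with \(\E [Y_n]=\frac 12 (1-2^{-n})\approx \frac 12\).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Simulate Y_n = X_1 + ... + X_n, where X_i = 2^{-i} with
# probability 1/2 and X_i = 0 otherwise, independently.
n, samples = 20, 100_000
bits = rng.integers(0, 2, size=(samples, n))  # column i-1: indicator that X_i = 2^{-i}
weights = 0.5 ** np.arange(1, n + 1)          # the possible values 2^{-1}, ..., 2^{-n}
y_n = bits @ weights                          # one sample of Y_n per row

print(y_n.mean())  # close to E[Y_n] = 1/2 - 2^{-(n+1)}, i.e. about 0.5
\end{verbatim}

Each row of bits is exactly the string of binary digits of the corresponding sample of \(Y_n\), matching the observation above that \(X_i\) corresponds to the \(i^{th}\) binary digit of \(Y\).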
1 \(\offsyl \) More precisely: we have \(\P \l [\lim _{n\to \infty }X_n(\omega )\text { exists}\r ]=1\) and for \(\omega \) in this set we can define \(X(\omega )=\lim _{n\to \infty }X_n(\omega )\). We don’t care about \(\omega \) for which the limit doesn’t exist, because this has probability zero! We could set \(X(\omega )=0\) in such cases, and it won’t affect any probabilities involving \(X\). See MAS31002/61022 for details.