Stochastic Processes and Financial Mathematics
(part one)
2.3 Infinite \(\Omega \)
So far we have focused on finite sample spaces of the form \(\Omega =\{x_1,x_2,\ldots ,x_n\}\). In such a case we would normally take \(\mc {F}=\mc {P}(\Omega )\), which is also a finite set. Since \(\mc {F}\) contains every subset of \(\Omega \), any \(\sigma \)-field on \(\Omega \) is a sub-\(\sigma \)-field of \(\mc {F}\). We have also seen how to construct other \(\sigma \)-fields on \(\Omega \).
In this case we can define a probability measure on \(\Omega \) by choosing a finite sequence \(a_1,a_2,\ldots ,a_n\) such that each \(a_i\in [0,1]\) and \(\sum _1^n a_i=1\). We set \(\P [x_i]=a_i\). This naturally extends to defining \(\P [A]\) for any subset \(A\sw \Omega \), by setting
\(\seteqnumber{0}{2.}{2}\)\begin{equation} \label {Pfinitedef} \P [A]=\sum _{\{i\,:\,x_i\in A\}} \P [x_i]=\sum _{\{i\,:\,x_i\in A\}} a_i. \end{equation}
It is hopefully obvious (though tedious to check) that, with this definition, \(\P \) is a probability measure. Consequently \((\Omega ,\mc {F},\P )\) is a probability space.
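As a concrete sketch of this construction, we can implement equation (2.3) directly. The example below (a fair six-sided die, chosen here purely for illustration) uses exact fractions for the weights \(a_i\), so that the \(a_i\) sum to exactly \(1\):

```python
from fractions import Fraction

# Sketch of the finite construction: Omega is a die roll, each weight
# a_i equals 1/6, and prob(A) computes the sum in equation (2.3).
# Fractions keep the arithmetic exact.
omega = [1, 2, 3, 4, 5, 6]
weights = {x: Fraction(1, 6) for x in omega}  # each a_i in [0,1]

assert sum(weights.values()) == 1  # the a_i sum to 1

def prob(event):
    """P[A] = sum of P[x_i] over those x_i in A, as in (2.3)."""
    return sum(weights[x] for x in event)

print(prob({2, 4, 6}))   # P[roll is even] -> 1/2
print(prob(set(omega)))  # P[Omega] -> 1
```

Any choice of non-negative weights summing to \(1\) works the same way; only the dictionary of weights changes.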
All our experiments with (finitely many tosses/throws of) dice and coins fit into this category of examples. In fact, if our experiment has countably many outcomes, say, \(\Omega =\{x_1,x_2,\ldots \}\) we can still work in much the same way, and the sum in (2.3) will become an infinite series that sums to \(1\).
However, the theory of stochastic processes, as well as most sophisticated examples of stochastic models, require an uncountable sample space. In such cases, we can’t use (2.3), because there is no such thing as an uncountable sum.
An example with uncountable \(\Omega \)
We now flex our muscles a bit, and look at an example where \(\Omega \) is uncountable. Suppose we toss a coin infinitely many times; then \(\Omega =\{H,T\}^\N \), meaning that we write an outcome as a sequence \(\omega =\omega _1,\omega _2,\ldots \) where each \(\omega _i\in \{H,T\}\). The set \(\Omega \) is uncountable.
We define the random variables \(X_n(\omega )=\omega _n\), so that \(X_n\) represents the result (\(H\) or \(T\)) of the \(n^{th}\) toss. We take
\[ \F = \sigma (X_1,X_2,\ldots ) \]
i.e. \(\F \) is the smallest \(\sigma \)-field with respect to which all the \(X_n\) are random variables. Then
\(\seteqnumber{0}{2.}{3}\)\begin{eqnarray*} \sigma (X_1) &=& \{\emptyset ,\{H\star \star \star \ldots \},\{T\star \star \star \ldots \},\Om \} \\ \sigma (X_1,X_2) &=& \sigma \Big (\{HH\star \star \ldots \},\{TH\star \star \ldots \}, \{HT\star \star \ldots \},\{TT\star \star \ldots \} \Big ) \\ &=& \Big \{\emptyset , \{HH\star \star \ldots \},\{TH\star \star \ldots \},\{HT\star \star \ldots \},\{TT\star \star \ldots \},\\ && \{H\star \star \star \ldots \},\{T\star \star \star \ldots \},\{\star H\star \star \ldots \},\{\star T\star \star \ldots \}, \left \{\begin{array}{l} HH\star \star \ldots \\TT\star \star \ldots \end {array} \right \}, \left \{\begin{array}{l} HT\star \star \ldots \\TH\star \star \ldots \end {array} \right \}, \\ && \{HH\star \star \ldots \}^c,\{TH\star \star \ldots \}^c,\{HT\star \star \ldots \}^c,\{TT\star \star \ldots \}^c, \Om \Big \}, \end{eqnarray*}
where \(\star \) means that the toss in that position can be either \(H\) or \(T\); for example, \(\{H\star \star \star \ldots \} = \{\om : \om _1=H \}\).
With the information available to us in \(\sigma (X_1,X_2)\), we can distinguish between \(\om \)’s whose first or second outcomes differ. But if two \(\om \)’s have the same first and second outcomes, they fall into exactly the same subsets of \(\sigma (X_1,X_2)\). Consequently, a random variable that depends on anything more than the first and second outcomes will not be \(\sigma (X_1,X_2)\)-measurable.
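The structure of \(\sigma (X_1,X_2)\) can be illustrated in code. This is a sketch only (finite strings stand in for the infinite sequences \(\om \)): each event in \(\sigma (X_1,X_2)\) corresponds to a subset of the four prefixes \(HH,HT,TH,TT\), giving \(2^4=16\) events, matching the list above.

```python
from itertools import combinations

# Sketch: an event in sigma(X_1, X_2) is determined by the first two
# tosses, so it corresponds to a subset of the four length-2 prefixes.
# Finite strings stand in for the infinite sequences omega.
prefixes = ["HH", "HT", "TH", "TT"]

# All 2^4 = 16 subsets of the prefixes, from the empty set up to Omega.
events = [frozenset(c) for r in range(5) for c in combinations(prefixes, r)]
print(len(events))  # 16

def member(outcome, event):
    """Membership depends only on the first two tosses of the outcome."""
    return outcome[:2] in event

# Two outcomes with the same first two tosses lie in exactly the same
# events of sigma(X_1, X_2).
for ev in events:
    assert member("HHTTT", ev) == member("HHHHH", ev)
```

The loop at the end checks the point made above: outcomes agreeing in the first two tosses cannot be separated by any event in \(\sigma (X_1,X_2)\).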
It is not immediately clear if we can define a probability measure on \(\mc {F}\)! Since \(\Omega \) is uncountable, we cannot use the idea of (2.3) and define \(\P \) in terms of \(\P [\omega ]\) for each individual \(\omega \in \Omega \). Equation (2.3) simply would not make sense; there is no such thing as an uncountable sum.
To define a probability measure in this case requires a significant amount of machinery from measure theory. It is outside of the scope of this course. For our purposes, whenever we need to use an infinite \(\Omega \) you will be given a probability measure and some of its helpful properties. For example, in this case there exists a probability measure \(\P :\mc {F}\to [0,1]\) such that
• The \(X_n\) are independent random variables.
• \(\P [X_n=H]=\P [X_n=T]=\frac {1}{2}\) for all \(n\in \N \).
From this, you can work with \(\P \) without having to know how \(\P \) was constructed. You don’t even need to know exactly which subsets of \(\Omega \) are in \(\mc {F}\), because Proposition 2.2.6 gives you access to plenty of random variables.
Remark 2.3.1 \(\offsyl \) In this case it turns out that \(\mc {F}\) is much smaller than \(\mc {P}(\Omega )\). In fact, if we tried to take \(\mc {F}=\mc {P}(\Omega )\), we would (after some significant effort) discover that there is no probability measure \(\P :\mc {P}(\Omega )\to [0,1]\) (i.e. satisfying Definition 2.1.3) in which the coin tosses are independent. This is irritating, and surprising, and we just have to live with it.
Almost surely
In the example from Section 2.3 we used \(\Omega =\{H,T\}^\N \), the set of all sequences of \(H\)s and \(T\)s. Our probability measure corresponded to independent, fair coin tosses, and we used the random variable \(X_n\) for the \(n^{th}\) toss.
Let’s examine this example a bit. First let us note that, for any individual sequence \(\omega _1,\omega _2,\ldots \) of heads and tails, by independence
\[\P [X_1=\omega _1, X_2=\omega _2, \ldots ]=\frac 12\cdot \frac 12\cdot \frac 12\ldots =0.\]
So every element of \(\Omega \) has probability zero. This is not a problem – if we take enough elements of \(\Omega \) together then we do get non-zero probabilities, for example
\[\P [X_1=H]=\P \l [\omega \in \Omega \text { such that }\omega _1=H\r ]=\frac 12\]
which is not surprising.
The probability that we never throw a head is
\[ \P [\text {for all }n, X_n=T]=\frac 12\cdot \frac 12\ldots =0 \]
which means that the probability that we eventually throw a head is
\[\P [\text {for some }n, X_n=H]=1-\P [\text {for all }n, X_n=T]=1.\]
So, the event \(\{\text {for some }n, X_n=H\}\) has probability \(1\), but is not equal to the whole sample space \(\Omega \). To handle this situation we have a piece of terminology: when an event has probability \(1\), we say that it occurs almost surely.
So, we would say that ‘almost surely, our coin will eventually throw a head’. We might say that ‘\(Y\leq 1\)’ almost surely, to mean that \(\P [Y\leq 1]=1\). This piece of terminology will be used very frequently from now on. We might sometimes say that an event ‘almost always’ happens, with the same meaning.
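The limit computation behind this statement can be checked numerically. This is an illustrative sketch: the probability that the first \(n\) tosses are all tails is \((1/2)^n\), so the probability of at least one head among the first \(n\) tosses is \(1-(1/2)^n\), which tends to \(1\).

```python
from fractions import Fraction

# Sketch: P[first n tosses all tails] = (1/2)^n by independence, so
# P[at least one head among the first n tosses] = 1 - (1/2)^n -> 1.
def prob_no_head(n):
    return Fraction(1, 2) ** n

for n in (1, 10, 30):
    print(n, float(1 - prob_no_head(n)))
```

Already at \(n=30\) the probability of no head is below \(10^{-9}\); in the limit it is exactly \(0\), which is the almost-sure statement above.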
For another example, suppose that we define \(q^H_n\) and \(q^T_n\) to be the proportion of heads and, respectively, tails in the first \(n\) coin tosses \(X_1,X_2,\ldots ,X_n\). Formally, this means that
\[q^H_n=\frac {1}{n}\sum \limits _{i=1}^n\1\{X_i=H\}\quad \text {and}\quad q^T_n=\frac {1}{n}\sum \limits _{i=1}^n\1\{X_i=T\}.\]
Here \(\1\{X_i=H\}\) is equal to \(1\) if \(X_i=H\), and equal to zero otherwise; similarly for \(\1\{X_i=T\}\). We will think a bit more about random variables of this type in the next section. Of course, \(q^H_n+q^T_n=1\).
The random variables \(\1\{X_i=H\}\) are i.i.d. with \(\E [\1\{X_i=H\}]=\frac 12\), hence by Theorem 1.1.1 we have \(\P [q^H_n\to \tfrac 12\text { as }n\to \infty ]=1,\) and by the same argument we also have \(\P [q^T_n\to \frac 12\text { as }n\to \infty ]=1\). In words, this means that in the long run half of our tosses will be heads and half will be tails (which makes sense, since our coin is fair). That is, the event
\[E=\l \{\lim \limits _{n\to \infty }q^H_n=\frac 12\;\text { and }\lim \limits _{n\to \infty }q^T_n=\frac 12\r \}\]
occurs almost surely.
There are many, many examples of sequences (e.g. \(HHTHHTHHT\ldots \)) that do not have \(q^H_n\to \frac 12\) and \(q^T_n\to \frac 12\). We might think of the event \(E\) as being only a ‘small’ subset of \(\Omega \), but nonetheless it has probability one.
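A simulation illustrates this convergence. This is a Monte Carlo sketch (the sample size and seed are arbitrary choices for reproducibility, not part of the construction of \(\P \)):

```python
import random

# Monte Carlo sketch: simulate n independent fair coin tosses and
# compute the proportions q_n^H and q_n^T of heads and tails.
random.seed(0)  # arbitrary seed, fixed for reproducibility

n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))  # sum of 1{X_i = H}
q_H = heads / n
q_T = 1 - q_H   # since q_n^H + q_n^T = 1

print(q_H, q_T)  # both close to 1/2 for large n
```

A typical run gives proportions within about \(1/\sqrt {n}\) of \(\frac 12\), consistent with the almost-sure convergence of \(q^H_n\) and \(q^T_n\).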