Stochastic Processes and Financial Mathematics
(part one)
2.3 Infinite $\Omega$
So far we focused on finite sample spaces of the form $\Omega = \{\omega_1, \ldots, \omega_n\}$. In such a case we would normally take $\mathcal{F} = \mathcal{P}(\Omega)$, which is also a finite set. Since $\mathcal{P}(\Omega)$
contains every subset of $\Omega$, any $\sigma$-field on $\Omega$ is a sub-$\sigma$-field of $\mathcal{P}(\Omega)$. We have seen how it is possible to construct other $\sigma$-fields on $\Omega$ too.
In this case we can define a probability measure $\mathbb{P}$ on $\mathcal{P}(\Omega)$ by choosing a finite sequence $(p_i)_{i=1}^n$ such that each $p_i \in [0,1]$ and $\sum_{i=1}^n p_i = 1$. We set $\mathbb{P}[\{\omega_i\}] = p_i$.
This naturally extends to defining $\mathbb{P}[A]$ for any subset $A \subseteq \Omega$, by setting
$$\mathbb{P}[A] = \sum_{i \,:\, \omega_i \in A} p_i. \qquad (2.3)$$
It is hopefully obvious (and tedious to check) that, with this definition, $\mathbb{P}$ is a probability measure. Consequently $(\Omega, \mathcal{P}(\Omega), \mathbb{P})$ is a probability space.
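The construction in (2.3) is easy to carry out concretely. Here is a minimal sketch in Python; the sample space and weights below are invented purely for illustration:

```python
# A sketch of the construction above: a finite sample space with
# invented outcomes and weights, and P[A] defined as the sum in (2.3).
weights = {"a": 0.5, "b": 0.25, "c": 0.25}  # each p_i in [0,1], summing to 1

def prob(event):
    """P[A] = sum of the weights p_i over the outcomes omega_i in A."""
    return sum(weights[omega] for omega in event)

# P[Omega] = 1, and P is additive over disjoint events:
print(prob({"a", "b", "c"}))           # 1.0
print(prob({"a"}) + prob({"b", "c"}))  # 1.0
```

The "tedious to check" part amounts to verifying that `prob` satisfies the axioms of a probability measure, which here reduces to rearranging finite sums.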
All our experiments with (finitely many tosses/throws of) dice and coins fit into this category of examples. In fact, if our experiment has countably many outcomes, $\Omega = \{\omega_1, \omega_2, \ldots\}$ say, we can
still work in much the same way, and the sum in (2.3) will become an infinite series that sums to $1$.
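For instance (a standard example, not one taken from the text above): toss a fair coin repeatedly and let the outcome $\omega_k$ mean "the first head occurs on toss $k$". Then $\Omega = \{\omega_1, \omega_2, \ldots\}$ is countable, and we may take

```latex
p_k = \mathbb{P}[\{\omega_k\}] = 2^{-k},
\qquad
\sum_{k=1}^{\infty} p_k = \sum_{k=1}^{\infty} 2^{-k} = 1,
```

so the sum in (2.3) becomes a convergent geometric series.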
However, the theory of stochastic processes, as well as most sophisticated examples of stochastic models, requires an uncountable sample space $\Omega$. In such cases we can’t use (2.3), because there is no such thing as an uncountable sum.
An example with uncountable $\Omega$
We now flex our muscles a bit, and look at an example where $\Omega$ is uncountable. We toss a coin infinitely many times, so $\Omega = \{H, T\}^{\mathbb{N}}$, meaning that we write an outcome $\omega \in \Omega$ as a sequence
$$\omega = (\omega_1, \omega_2, \omega_3, \ldots)$$
where $\omega_n \in \{H, T\}$. The set $\Omega$ is uncountable.
We define the random variables $X_n(\omega) = \omega_n$, so $X_n$ represents the result ($H$ or $T$) of the $n^{th}$ throw. We take
$$\mathcal{F} = \sigma(X_n \,;\, n \in \mathbb{N}),$$
i.e. $\mathcal{F}$ is the smallest $\sigma$-field with respect to which all the $X_n$ are random variables. Then
$$\sigma(X_1, X_2) = \sigma\big(\{HH{*}\}, \{HT{*}\}, \{TH{*}\}, \{TT{*}\}\big),$$
where $*$ means that it can take on either $H$ or $T$, so e.g. $\{HT{*}\} = \{\omega \in \Omega : \omega_1 = H,\ \omega_2 = T\}$.
With the information available to us in $\sigma(X_1, X_2)$, we can distinguish between $\omega$’s where the first or second outcomes are different. But if two $\omega$’s have the same first and second
outcomes, they fall into exactly the same subsets of $\sigma(X_1, X_2)$. Consequently, if a random variable depends on anything more than the first and second outcomes, it will not be
$\sigma(X_1, X_2)$-measurable.
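One way to see this concretely (a sketch, truncating to the first three tosses so that the outcome space is finite and can be enumerated): a random variable that is measurable with respect to $\sigma(X_1, X_2)$, the $\sigma$-field generated by the first two tosses, must take the same value on any two outcomes that agree in their first two coordinates. The function names below are invented for illustration.

```python
from itertools import product

# Truncate to three tosses so we can enumerate all outcomes explicitly.
outcomes = list(product("HT", repeat=3))

def constant_on_first_two(Y):
    """Check whether Y(omega) depends only on the first two coordinates,
    i.e. Y is constant on each set {omega : omega_1 = a, omega_2 = b}."""
    values = {}
    for omega in outcomes:
        key = omega[:2]
        if key in values and values[key] != Y(omega):
            return False
        values[key] = Y(omega)
    return True

heads_in_first_two = lambda w: w[:2].count("H")  # depends on tosses 1, 2 only
third_toss = lambda w: w[2]                      # depends on toss 3

print(constant_on_first_two(heads_in_first_two))  # True
print(constant_on_first_two(third_toss))          # False
```

The second function fails the check, which mirrors the statement above: a random variable that uses the third toss cannot be measurable with respect to the $\sigma$-field generated by the first two.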
It is not immediately clear if we can define a probability measure on $(\Omega, \mathcal{F})$! Since $\Omega$ is uncountable, we cannot use the idea of (2.3) and define $\mathbb{P}$ in terms of $\mathbb{P}[\{\omega\}]$ for each individual $\omega \in \Omega$. Equation (2.3) simply would not make sense; there is no such thing as an uncountable sum.
To define a probability measure in this case requires a significant amount of machinery from measure theory, which is outside of the scope of this course. For our purposes, whenever we need to use an infinite $\Omega$ you will be given a probability measure $\mathbb{P}$ and some of its helpful properties. For example, in this case there exists a probability measure $\mathbb{P}$ on $(\Omega, \mathcal{F})$ such that
$$\mathbb{P}[X_n = H] = \mathbb{P}[X_n = T] = \tfrac{1}{2} \quad \text{for all } n \in \mathbb{N},$$
and such that the random variables $(X_n)$ are independent.
From this, you can work with $\mathbb{P}$ without having to know how $\mathbb{P}$ was constructed. You don’t even need to know exactly which subsets of $\Omega$ are in $\mathcal{F}$, because Proposition 2.2.6 gives you access to plenty of random variables.
In this case it turns out that $\mathcal{F}$ is much smaller than $\mathcal{P}(\Omega)$. In fact, if we tried to take $\mathcal{F} = \mathcal{P}(\Omega)$, we would (after some significant effort) discover that there is no probability measure
$\mathbb{P}$ (i.e. no $\mathbb{P}$ satisfying Definition 2.1.3) in which the coin tosses are independent. This is irritating, and surprising, and we just have to live
with it.
Almost surely
In the example from Section 2.3 we used $\Omega = \{H, T\}^{\mathbb{N}}$, which is the set of all sequences made up of $H$s and $T$s. Our probability measure $\mathbb{P}$ corresponded to
independent, fair, coin tosses and we used the random variable $X_n$ for the $n^{th}$ toss.
Let’s examine this example a bit. First let us note that, for any individual sequence $\omega = (\omega_1, \omega_2, \ldots) \in \Omega$ of heads and tails, by independence
$$\mathbb{P}[\{\omega\}] = \lim_{n \to \infty} \mathbb{P}[X_1 = \omega_1, \ldots, X_n = \omega_n] = \lim_{n \to \infty} 2^{-n} = 0.$$
So every element of $\Omega$ has probability zero. This is not a problem – if we take enough elements of $\Omega$ together then we do get non-zero probabilities, for example
$$\mathbb{P}[X_1 = H] = \tfrac{1}{2}, \qquad \mathbb{P}[X_1 = H,\ X_2 = T] = \tfrac{1}{4},$$
which is not surprising.
The probability that we never throw a head is
$$\mathbb{P}[X_n = T \text{ for all } n] = \lim_{n \to \infty} \mathbb{P}[X_1 = T, \ldots, X_n = T] = \lim_{n \to \infty} 2^{-n} = 0,$$
which means that the probability that we eventually throw a head is
$$\mathbb{P}[X_n = H \text{ for some } n] = 1 - \mathbb{P}[X_n = T \text{ for all } n] = 1.$$
So, the event $E = \{\omega \in \Omega : \omega_n = H \text{ for some } n\}$ has probability $1$, but $E$ is not equal to the whole sample space $\Omega$ (for example, the sequence $(T, T, T, \ldots)$ is not in $E$). To handle this situation we have a piece of terminology: if an event $E$ satisfies $\mathbb{P}[E] = 1$, we say that $E$ occurs almost surely.
So, we would say that ‘almost surely, our coin will eventually throw a head’. We might say that ‘$X \geq 0$’ almost surely, to mean that $\mathbb{P}[X \geq 0] = 1$. This piece of terminology will be used very frequently
from now on. We might sometimes say that an event ‘almost always’ happens, with the same meaning.
For another example, suppose that we define $H_n$ and $T_n$ to be the proportion of heads and, respectively, tails in the first $n$ coin tosses $X_1, \ldots, X_n$. Formally, this means that
$$H_n = \frac{1}{n} \sum_{i=1}^n \mathbb{1}\{X_i = H\}, \qquad T_n = \frac{1}{n} \sum_{i=1}^n \mathbb{1}\{X_i = T\}.$$
Here $\mathbb{1}\{X_i = H\}$ is equal to $1$ if $X_i = H$, and equal to zero otherwise; similarly for $\mathbb{1}\{X_i = T\}$. We will think a bit more about random variables of this type in the next section. Of course,
$H_n + T_n = 1$.
The random variables $(\mathbb{1}\{X_i = H\})_{i=1}^{\infty}$ are i.i.d. with $\mathbb{E}[\mathbb{1}\{X_i = H\}] = \frac{1}{2}$, hence by Theorem 1.1.1 we have $\mathbb{P}[H_n \to \frac{1}{2}] = 1$, and by the same argument we also have $\mathbb{P}[T_n \to \frac{1}{2}] = 1$. In words, this means that in the long run half our tosses will be tails
and half will be heads (which makes sense – our coin is fair). That is, the event
$$E = \left\{\omega \in \Omega \,:\, H_n(\omega) \to \tfrac{1}{2} \text{ and } T_n(\omega) \to \tfrac{1}{2}\right\}$$
occurs almost surely.
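We cannot toss a coin infinitely many times on a computer, but a quick simulation illustrates the long-run behaviour. This is only a sketch: the seed and sample size are arbitrary choices, and $H_n$ below denotes the proportion of heads in the first $n$ simulated tosses.

```python
import random

random.seed(2024)  # fixed seed so the run is reproducible

n = 100_000
heads = sum(1 for _ in range(n) if random.choice("HT") == "H")
H_n = heads / n  # proportion of heads among the first n tosses

# For n this large, H_n should already be very close to 1/2
# (its standard deviation is about 0.0016).
print(abs(H_n - 0.5) < 0.01)  # True
```

Of course, a finite simulation can only suggest convergence; the almost-sure statement itself is exactly what the law of large numbers provides.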
There are many, many examples of sequences (e.g. $HHTHHTHHT\ldots$, for which $H_n \to \frac{2}{3}$) that don’t have $H_n \to \frac{1}{2}$ and $T_n \to \frac{1}{2}$. We might think of the event $E$ as being only a ‘small’ subset of
$\Omega$, but it has probability one.