last updated: September 19, 2024

Stochastic Processes and Financial Mathematics
(part one)


8.5 Kolmogorov’s 0-1 law \(\offsyl \)

This section is off-syllabus and is marked with a \(\offsyl \). It contains an intriguing fact about sequences of \(\sigma \)-fields, but it sits somewhere between the material covered in our own course and MAS31002/61022 (Probability with Measure). It is mainly of interest to those taking MAS31002/61022 alongside this course, and since we will not use it within our course it is best placed off-syllabus. It has a close connection to the second Borel-Cantelli lemma, which is introduced in MAS31002/61022.

Let \((\mc {F}_n)_{n\in \N }\) be a sequence of \(\sigma \)-fields, on the same probability space \((\Omega ,\mc {F},\P )\). The tail \(\sigma \)-field \(\mc {T}\) of \((\mc {F}_n)\) is defined by

\begin{equation} \label {eq:tail_sigma_field} \mc {T}=\bigcap _{n=1}^\infty \sigma (\mc {F}_n,\mc {F}_{n+1},\ldots ). \end{equation}

Note that \(\mc {T}\) is a \(\sigma \)-field by Lemma 2.1.5. The intuition here is that the \(\sigma \)-field \(\mc {T}\) contains events that depend only on information ‘in the tail’ of the sequence \((\mc {F}_n)\). That is, if we have an event \(E\in \mc {T}\), then for any \(N\in \N \) we could tell whether \(E\) occurred by looking only at the occurrence of events \(E'\in \mc {F}_n\) for \(n\geq N\). For example, if \((X_n)\) is a sequence of random variables and \(\mc {F}_n=\sigma (X_n)\), then the almost sure limit \(X\) of \(X_n\stackrel {a.s.}{\to } X\), if it exists, satisfies \(X\in m\mc {T}\): the value of the limit is unchanged if we alter any finite number of the \(X_n\).
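To build further intuition (a standard illustration, not something we will use later): take \(\mc {F}_n=\sigma (X_n)\) as above. Each of the events

\[ \Big \{\textstyle \sum _{n=1}^\infty X_n \text { converges}\Big \},\qquad \{(X_n)\text { converges}\},\qquad \Big \{\limsup _{n\to \infty } X_n \geq 0\Big \} \]

is a tail event, because none of them is affected by altering the values of \(X_1,\ldots ,X_{N-1}\), for any \(N\). By contrast, \(\{X_1>0\}\) is determined entirely by \(X_1\) and is not, in general, an element of \(\mc {T}\).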

The next result may appear surprising at first. The key point is that the \(\mc {F}_n\) are assumed to be independent, which means that they have no information in common. Consequently (8.2) implies that \(\mc {T}\) contains no information.

  • Theorem 8.5.1 (Kolmogorov’s 0-1 law) Let \((\mc {F}_n)\) be a sequence of independent \(\sigma \)-fields and let \(\mc {T}\) be the associated tail \(\sigma \)-field. If \(A\in \mc {T}\) then \(\P [A]=0\) or \(\P [A]=1\).

Proof: Let \(A\in \mc {T}\). Then \(A\in \sigma (\mc {F}_{n+1},\mc {F}_{n+2},\ldots )\) for all \(n\in \N \). This means that \(A\) is independent of \(\sigma (\mc {F}_1,\ldots ,\mc {F}_n)\), for all \(n\). It follows that \(A\) is independent of \(\sigma (\mc {F}_n\-n\in \N )\). However, from (8.2) we have that \(\mc {T}\sw \sigma (\mc {F}_n\-n\in \N )\), which means that \(A\) is independent of \(\mc {T}\). Hence \(A\) is independent of \(A\) (this is not a typo!), which means that \(\P [A]=\P [A\cap A]=\P [A]\P [A]=\P [A]^2\). The only solutions of the equation \(x^2=x\) are \(0\) and \(1\), hence \(\P [A]=0\) or \(\P [A]=1\).   ∎
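Here is a standard application of Theorem 8.5.1, included as an aside. Let \((X_n)\) be a sequence of independent random variables and set \(\mc {F}_n=\sigma (X_n)\), which are independent \(\sigma \)-fields. The event

\[ A=\Big \{\textstyle \sum _{n=1}^\infty X_n \text { converges}\Big \} \]

is a tail event, since the convergence of a series does not depend on its first finitely many terms. Theorem 8.5.1 therefore tells us that a series of independent random variables converges either almost surely or almost never; no intermediate probability is possible.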

  • Remark 8.5.2 Make the same independence assumption as in Theorem 8.5.1, and suppose that \(X\in m\mc {T}\). Then \(\{X\leq x\}\in \mc {T}\) for all \(x\in \R \), so Theorem 8.5.1 gives that \(\P [X\leq x]\) is either \(0\) or \(1\) for each \(x\in \R \). A bit of analysis shows that if we set \(c=\inf \{x\in \R \-\P [X\leq x]=1\}\) then in fact \(\P [X=c]=1\). Therefore, any random variable that is \(\mc {T}\)-measurable is almost surely equal to a constant.
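As an illustration of Remark 8.5.2 (again an aside, using the same setup): let \((X_n)\) be independent, and set \(\mc {F}_n=\sigma (X_n)\) and \(S_n=X_1+\cdots +X_n\). The random variable

\[ X=\limsup _{n\to \infty }\frac {S_n}{n} \]

satisfies \(X\in m\mc {T}\), taking values in \([-\infty ,\infty ]\): altering \(X_1,\ldots ,X_k\) changes each \(S_n\) by a fixed amount, which vanishes after dividing by \(n\). Hence \(X\) is almost surely constant (possibly equal to \(\pm \infty \)). This observation is a first step towards the strong law of large numbers.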