last updated: October 16, 2024

Stochastic Processes and Financial Mathematics
(part one)

\(\newcommand{\footnotename}{footnote}\) \(\def \LWRfootnote {1}\) \(\newcommand {\footnote }[2][\LWRfootnote ]{{}^{\mathrm {#1}}}\) \(\newcommand {\footnotemark }[1][\LWRfootnote ]{{}^{\mathrm {#1}}}\) \(\let \LWRorighspace \hspace \) \(\renewcommand {\hspace }{\ifstar \LWRorighspace \LWRorighspace }\) \(\newcommand {\mathnormal }[1]{{#1}}\) \(\newcommand \ensuremath [1]{#1}\) \(\newcommand {\LWRframebox }[2][]{\fbox {#2}} \newcommand {\framebox }[1][]{\LWRframebox } \) \(\newcommand {\setlength }[2]{}\) \(\newcommand {\addtolength }[2]{}\) \(\newcommand {\setcounter }[2]{}\) \(\newcommand {\addtocounter }[2]{}\) \(\newcommand {\arabic }[1]{}\) \(\newcommand {\number }[1]{}\) \(\newcommand {\noalign }[1]{\text {#1}\notag \\}\) \(\newcommand {\cline }[1]{}\) \(\newcommand {\directlua }[1]{\text {(directlua)}}\) \(\newcommand {\luatexdirectlua }[1]{\text {(directlua)}}\) \(\newcommand {\protect }{}\) \(\def \LWRabsorbnumber #1 {}\) \(\def \LWRabsorbquotenumber "#1 {}\) \(\newcommand {\LWRabsorboption }[1][]{}\) \(\newcommand {\LWRabsorbtwooptions }[1][]{\LWRabsorboption }\) \(\def \mathchar {\ifnextchar "\LWRabsorbquotenumber \LWRabsorbnumber }\) \(\def \mathcode #1={\mathchar }\) \(\let \delcode \mathcode \) \(\let \delimiter \mathchar \) \(\def \oe {\unicode {x0153}}\) \(\def \OE {\unicode {x0152}}\) \(\def \ae {\unicode {x00E6}}\) \(\def \AE {\unicode {x00C6}}\) \(\def \aa {\unicode {x00E5}}\) \(\def \AA {\unicode {x00C5}}\) \(\def \o {\unicode {x00F8}}\) \(\def \O {\unicode {x00D8}}\) \(\def \l {\unicode {x0142}}\) \(\def \L {\unicode {x0141}}\) \(\def \ss {\unicode {x00DF}}\) \(\def \SS {\unicode {x1E9E}}\) \(\def \dag {\unicode {x2020}}\) \(\def \ddag {\unicode {x2021}}\) \(\def \P {\unicode {x00B6}}\) \(\def \copyright {\unicode {x00A9}}\) \(\def \pounds {\unicode {x00A3}}\) \(\let \LWRref \ref \) \(\renewcommand {\ref }{\ifstar \LWRref \LWRref }\) \( \newcommand {\multicolumn }[3]{#3}\) \(\require {textcomp}\) \(\newcommand {\intertext }[1]{\text {#1}\notag \\}\) \(\let \Hat \hat \) \(\let \Check \check \) \(\let \Tilde \tilde \) \(\let \Acute \acute \) \(\let \Grave \grave \) \(\let \Dot \dot \) \(\let \Ddot \ddot \) \(\let \Breve \breve \) \(\let \Bar \bar \) \(\let \Vec \vec \) \(\DeclareMathOperator {\var }{var}\) \(\DeclareMathOperator {\cov }{cov}\) \(\def \ra {\Rightarrow }\) \(\def \to {\rightarrow }\) \(\def \iff {\Leftrightarrow }\) \(\def \sw {\subseteq }\) \(\def \wt {\widetilde }\) \(\def \mc {\mathcal }\) \(\def \mb {\mathbb }\) \(\def \sc {\setminus }\) \(\def \v {\textbf }\) \(\def \p {\partial }\) \(\def \E {\mb {E}}\) \(\def \P {\mb {P}}\) \(\def \R {\mb {R}}\) \(\def \C {\mb {C}}\) \(\def \N {\mb {N}}\) \(\def \Q {\mb {Q}}\) \(\def \Z {\mb {Z}}\) \(\def \B {\mb {B}}\) \(\def \~{\sim }\) \(\def \-{\,;\,}\) \(\def \|{\,|\,}\) \(\def \qed {$\blacksquare $}\) \(\def \1{\unicode {x1D7D9}}\) \(\def \cadlag {c\`{a}dl\`{a}g}\) \(\def \p {\partial }\) \(\def \l {\left }\) \(\def \r {\right }\) \(\def \F {\mc {F}}\) \(\def \G {\mc {G}}\) \(\def \H {\mc {H}}\) \(\def \Om {\Omega }\) \(\def \om {\omega }\)

9.3 Long term behaviour: symmetric case

In this section we study the simple symmetric random walk. Our goal is to show that \((S_n)\) not only oscillates as \(n\to \infty \) but in fact visits every \(k\in \Z \) infinitely many times. This implies that \((S_n)\) will oscillate wildly, with ever larger oscillations, as \(n\to \infty \).

Let \(S_n=\sum _{i=1}^nX_i\) where the \((X_i)\) are independent and have identical distribution \(\P [X_i=1]=\P [X_i=-1]=\frac 12\). Note that \(S_0=0\). For \(k\in \Z \) let

\[T_k=\inf \{n\geq 0\-S_n=k\}\]

be the hitting time of \(k\) and let

\[R=\inf \{n\geq 1\-S_n=0\}\]

be the first return time to zero. We set \(R=\infty \) if the walk does not return to zero, in keeping with the convention that \(\inf \emptyset =\infty \). In Example 8.2.2 we showed that \(T_k\) is a stopping time, and in Exercise 8.5 we showed that \(R\) is a stopping time.
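These definitions are easy to experiment with on a computer. The following minimal Python sketch (ours, not part of the printed notes; the function name walk_times and the horizon nmax are our own choices, since a simulation must truncate at some finite time) simulates the walk and records \(T_k\) and \(R\):

    import random

    def walk_times(k, nmax=10**6):
        """Simulate (S_n) up to nmax steps; return (T_k, R), with None
        recording a time that was not observed within the horizon."""
        S, T_k, R = 0, None, None
        if k == 0:
            T_k = 0  # T_0 = 0, since S_0 = 0
        for n in range(1, nmax + 1):
            S += random.choice((-1, 1))   # X_n = +1 or -1, each with probability 1/2
            if T_k is None and S == k:
                T_k = n                   # first hitting time of k
            if R is None and S == 0:
                R = n                     # first return time to zero (note n >= 1)
            if T_k is not None and R is not None:
                break
        return T_k, R

    print(walk_times(3))

The truncation at nmax is purely an artefact of simulation; the results of this section show that both times are in fact almost surely finite.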

  • Lemma 9.3.1 It holds that \(\P [R<\infty ]=1\).

Proof: Note that \(\P [R<\infty ]\in [0,1]\). We will argue by contradiction. Suppose that \(\P [R<\infty ]=p<1\). By the strong Markov property, the process \((S_{R+n})_{n\geq 0}\) has the same distribution as \((S_n)\). In words, each time the random walk returns to zero the process restarts, and from then on acts like a simple symmetric random walk, independent of the past. Let \(G=\sum _{n=1}^\infty \1\{S_n=0\}\), which in words counts the number of times \((S_n)\) returns to zero. By the strong Markov property, applied repeatedly at each return, \(G\) has a geometric distribution \(\P [G=j]=p^j(1-p)\) for all \(j\in \{0,1,2,\ldots \}\). In particular, \(\E [G]=\frac {p}{1-p}<\infty \).

We will now calculate \(\E [G]\) using a different method. Note that \(G=\sum _{n=1}^\infty \1\{S_n=0\}\), hence

\begin{equation} \label {eq:E_returns_ssrw} \E [G]=\sum _{n=1}^\infty \E [\1\{S_n=0\}]=\sum _{n=1}^\infty \P [S_n=0]. \end{equation}

In the first equality above we have used Exercise 6.8 to swap the infinite sum and expectation. Calculating \(\P [S_n=0]\) is Exercise 7.11; it is zero when \(n\) is odd, which leaves us only to consider

\begin{equation} \label {eq:rw_stirling_1} \P [S_{2n}=0] \;=\; \binom {2n}{n}2^{-2n} \;=\; \frac {(2n)!}{(n!)^2}2^{-2n} \;\sim \; \frac {\sqrt {4\pi n}\,(2n/e)^{2n}}{2\pi n\,(n/e)^{2n}} 2^{-2n} \;=\; \frac {1}{\sqrt {\pi n}}. \end{equation}

In the above we have used Stirling’s approximation, in particular equation (9.7), to estimate the factorials as \(n\to \infty \). From (9.9) we obtain that \(\sqrt {\pi n}\,\P [S_{2n}=0]\to 1\) as \(n\to \infty \). Hence there exists some \(N\in \N \) such that for all \(n\geq N\) we have \(\P [S_{2n}=0]\geq \frac {1}{2\sqrt {\pi n}}\). Putting this back into (9.8) we have

\[\E [G] \geq \sum _{n=N}^\infty \P [S_{2n}=0] \geq \sum _{n=N}^\infty \frac {1}{2\sqrt {\pi n}}=\infty .\]

The final series diverges because \(\sum _n n^{-1/2}=\infty \). But we deduced above that \(\E [G]\) was finite, so we have reached a contradiction. Hence, in fact, \(\P [R<\infty ]=1\).   ∎
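It is instructive to check (9.9), and the divergence above, numerically. The following sketch (ours) uses the recursion \(\P [S_{2n}=0]=\frac {2n-1}{2n}\P [S_{2n-2}=0]\), which follows from the binomial formula and avoids overflowing floating-point arithmetic; it prints \(\P [S_{2n}=0]\) alongside \(1/\sqrt {\pi n}\), together with the growing partial sums of (9.8):

    import math

    p, total = 1.0, 0.0   # p = P[S_0 = 0] = 1
    for n in range(1, 10001):
        p *= (2 * n - 1) / (2 * n)   # now p = P[S_{2n} = 0]
        total += p                   # partial sum of (9.8) (even terms only)
        if n in (10, 100, 1000, 10000):
            print(n, p, 1 / math.sqrt(math.pi * n), total)

The ratio of the two middle columns tends to \(1\), while the final column grows roughly like \(2\sqrt {n/\pi }\), without bound.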

  • Lemma 9.3.2 For all \(k\in \Z \) we have that \(\P [T_k<\infty ]=1\).

Proof: Note that \(T_0=0\), because \(S_0=0\). We next consider \(T_1\). From Lemma 9.3.1 we have \(\P [R<\infty ]=1\), hence by the strong Markov property (applied at successive returns to zero) almost surely \((S_n)_{n=0}^\infty \) visits zero infinitely many times.

After each visit to zero, by the strong Markov property, the next move of the walk will be to \(1\) with probability \(\frac 12\), and to \(-1\) with probability \(\frac 12\), independently of the past. Hence the number of visits to zero that we must wait through before seeing a move up to \(1\) is Geometric(\(\frac 12\)). Since a geometric random variable is almost surely finite, we have \(\P [T_1<\infty ]=1\).

Let us now consider \(T_k\) for \(k\in \N \). To reach \(k\) we need to move from \(0\) to \(1\), then from \(1\) to \(2\), and so on until \(k\). By the strong Markov property applied successively at \(T_1\), then \(T_2\), then \(T_3\) and so on, each such move takes the same number of steps (in distribution) as an independent copy of \(T_1\). Hence the distribution of \(T_k\) is that of the sum of \(k\) i.i.d. copies of \(T_1\). Since \(\P [T_1<\infty ]=1\), it follows immediately that \(\P [T_k<\infty ]=1\).

Lastly, we consider \(k<0\). By symmetry of the walk about zero, \(T_k\) has the same distribution as \(T_{-k}\), and \(-k\in \N \), so \(\P [T_k<\infty ]=\P [T_{-k}<\infty ]=1\).   ∎
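The structure of this proof can be seen in simulation: sampling \(T_3\) directly gives (approximately) the same distribution as summing three independent samples of \(T_1\). A rough sketch of this check (ours; the sample size is an arbitrary choice, and every run terminates because the hitting times are almost surely finite):

    import random

    def hit(k):
        """First hitting time of k for a fresh simple symmetric random walk."""
        S, n = 0, 0
        while S != k:
            S += random.choice((-1, 1))
            n += 1
        return n

    N = 200
    direct = sorted(hit(3) for _ in range(N))
    summed = sorted(hit(1) + hit(1) + hit(1) for _ in range(N))
    print(direct[N // 2], summed[N // 2])   # the two sample medians roughly agree

Comparing medians (rather than means) here is deliberate: as we are about to see, \(\E [T_1]=\infty \), so sample means are of little use.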

  • Theorem 9.3.3 Almost surely, for all \(k\in \Z \) there are infinitely many \(n\in \N \) such that \(S_n=k\).

Proof: Since there are only countably many \(k\in \Z \), it suffices to prove the result for some fixed (but arbitrary) \(k\in \Z \). From Lemma 9.3.2 we have \(\P [T_k<\infty ]=1\), so almost surely the walk reaches \(k\), in fact \(S_{T_k}=k\). By the strong Markov property, after time \(T_k\) the walk continues to behave like a simple symmetric random walk, independent of its past. Lemma 9.3.1 gives that \(\P [R<\infty ]=1\), meaning that the walk will almost surely return (again) to \(k\). Repeating this argument, we obtain that almost surely the walk visits \(k\) infinitely many times.   ∎

How long does the simple symmetric random walk take to reach \(1\)?

Let us focus for a moment on the time \(T_1\). Even though we now know that \(\P [T_1<\infty ]=1\), we are not able to apply the optional stopping theorem to \((S_n)\) and \(T_1\), because \((S_n)\) is unbounded and we do not know whether \(\E [T_1]<\infty \). In fact \(\E [T_1]=\infty \), so the optional stopping theorem does not apply here. We can, however, argue by contradiction and use the optional stopping theorem to deduce that \(\E [T_1]=\infty \), as follows.

  • Lemma 9.3.4 It holds that \(\E [T_1]=\infty \).

Proof: Suppose that \(\E [T_1]<\infty \). Then we could apply the optional stopping theorem to \((S_n)\) and \(T_1\) using condition (c). This would give \(\E [S_{T_1}]=\E [S_0]=0\). But, from Lemma 9.3.2 we have \(\P [T_1<\infty ]=1\), which means, by definition of \(T_1\), that \(S_{T_1}=1\) almost surely, so \(\E [S_{T_1}]=1\). This is a contradiction.   ∎

  • Remark 9.3.5 More generally, we have to be very careful about using the optional stopping theorem. The conditions fail in many situations, and there are many examples of stopping times \(T\) and martingales \((M_n)\) for which \(\E [M_T]\neq \E [M_0]\).

From Lemmas 9.3.2 and 9.3.4 we have that \(\P [T_1<\infty ]=1\) but \(\E [T_1]=\infty \). This tells us that although the time \(T_1\) is almost surely finite, it can be very large. You might find this counter-intuitive at first glance, since \(\P [T_1=1]=\frac 12\) just by considering the first step; the point is that once the walk has (by chance) made a few steps downwards, it can take a very long time to find its way back up again. In the next lemma, a careful calculation will give us the exact distribution of \(T_1\). Note that \(T_1\) is odd, since \(S_n\) has the same parity as \(n\).
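This heavy tail is easy to observe empirically: sample means of \(T_1\) do not settle down as the sample grows, consistent with \(\E [T_1]=\infty \). A quick sketch (ours; the function name hit_one and the sample sizes are our own choices):

    import random, statistics

    def hit_one():
        """First hitting time of 1, i.e. one sample of T_1."""
        S, n = 0, 0
        while S != 1:
            S += random.choice((-1, 1))
            n += 1
        return n

    for size in (10, 100, 1000):
        # the sample means tend to grow with the sample size, rather than converge
        print(size, statistics.mean(hit_one() for _ in range(size)))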

  • Lemma 9.3.6 It holds that \(\P [T_1=2n-1]=\binom {2n}{n}2^{-2n}\frac {1}{2n-1}\sim \tfrac {1}{2n\sqrt {\pi n}}\).

Proof: For \(n=1\) we have \(\P [T_1=1]=\P [S_1=1]=\frac 12\), as required. It remains to consider \(n\geq 2\). Note that

\begin{align} \P [S_{2n}>0] &= \P [T_1\leq 2n-1, S_{2n}>0] \notag \\ &= \E \l [\1_{\{T_1\leq 2n-1\}}\1_{\{S_{2n}>0\}}\r ] \notag \\ &= \E \l [\E \l [\1_{\{T_1\leq 2n-1\}}\1_{\{S_{2n}>0\}}\|\mc {F}_{T_1}\r ]\r ] \notag \\ &= \E \l [\1_{\{T_1\leq 2n-1\}}\E \l [\1_{\{S_{2n}-S_{T_1}>0\}}\|\mc {F}_{T_1}\r ]\r ] \notag \\ &= \E \l [\1_{\{T_1\leq 2n-1\}}\times \frac 12\r ] \notag \\ &= \frac 12 \P [T_1\leq 2n-1]. \label {eq:T1S2n} \end{align} In the above, the first line follows because if \(S_{2n}>0\) then the walk has already passed through \(1\), so \(T_1\leq 2n\); since \(T_1\) is odd, in fact \(T_1\leq 2n-1\). The second and third lines use the relationship between expectation and conditional expectation. The fourth line uses that \(T_1\) is a stopping time, hence \(\{T_1\leq 2n-1\}\in \mc {F}_{T_1}\), together with the fact that \(S_{T_1}=1\), so that on this event \(\{S_{2n}>0\}=\{S_{2n}-S_{T_1}\geq 0\}=\{S_{2n}-S_{T_1}>0\}\) (the last equality holds because \(S_{2n}-S_{T_1}\) takes odd values, as noted below). The fifth line uses the strong Markov property, which gives that \(S_{2n}-S_{T_1}\) is independent of \(\mc {F}_{T_1}\) and behaves like a random walk with \(2n-T_1\) steps. Since \(T_1\) is odd, this is an odd number of steps, which means that \(S_{2n}-S_{T_1}\) takes odd values and has a distribution that is symmetric about \(0\); hence \(\E [\1_{\{S_{2n}-S_{T_1}>0\}}\|\mc {F}_{T_1}]=\frac 12\).

From the above calculation, and using the symmetry of the random walk about zero, we have that

\begin{equation} \label {eq:rw_sym_T1_dist_1} \P [T_1\leq 2n-1]=2\P [S_{2n}>0]=\P [S_{2n}>0]+\P [S_{2n}<0]=1-\P [S_{2n}=0]. \end{equation}

To finish, we note that for \(n\geq 2\),

\begin{align*} \P [T_1=2n-1] &= \P [T_1\leq 2n-1]-\P [T_1\leq 2n-3] \\ &= (1-\P [S_{2n}=0]) - (1-\P [S_{2n-2}=0]) \\ &= \binom {2n-2}{n-1}2^{-2n+2}-\binom {2n}{n} 2^{-2n} \\ &= \binom {2n}{n} 2^{-2n}\frac {1}{2n-1}. \end{align*} The second line of the above follows from (9.11), and the last line follows from a short calculation that is left for you. The last claim of the lemma follows from the same calculation as in (9.9).   ∎
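The short calculation in the final line can at least be checked numerically: the telescoped difference \(\P [S_{2n-2}=0]-\P [S_{2n}=0]\) and the closed form \(\binom {2n}{n}2^{-2n}\frac {1}{2n-1}\) agree. A sketch (ours; the helper p_zero is our own):

    import math

    def p_zero(n):
        """P[S_{2n} = 0], computed exactly."""
        return math.comb(2 * n, n) / 4 ** n

    for n in range(2, 7):
        lhs = p_zero(n - 1) - p_zero(n)                    # from the telescoping step
        rhs = math.comb(2 * n, n) / 4 ** n / (2 * n - 1)   # closed form in the lemma
        print(2 * n - 1, lhs, rhs)                         # the two columns agree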

The calculation that led to (9.10) is known as a ‘reflection principle’. It applies the strong Markov property at \(T_k\) (with \(k=1\), in this case), and then divides the future into two cases based on whether the first move after \(T_k\) is upwards or downwards. The distributions of the walk in these two cases are mirror images of one another, i.e. reflections about height \(k\).
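For small \(n\), the identity (9.10) can also be verified by brute force, since all \(2^{2n}\) paths of length \(2n\) are equally likely. A sketch (ours), counting paths rather than probabilities:

    from itertools import product

    n = 4   # check a walk of 2n = 8 steps
    pos = hit = 0
    for steps in product((-1, 1), repeat=2 * n):
        S, T1 = 0, None
        for i, x in enumerate(steps, start=1):
            S += x
            if T1 is None and S == 1:
                T1 = i            # first hitting time of 1 along this path
        if S > 0:
            pos += 1              # paths with S_{2n} > 0
        if T1 is not None and T1 <= 2 * n - 1:
            hit += 1              # paths with T_1 <= 2n - 1
    print(pos, hit)               # pos is exactly half of hit, as (9.10) asserts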