last updated: September 19, 2024

Stochastic Processes and Financial Mathematics
(part one)

\(\newcommand{\footnotename}{footnote}\) \(\def \LWRfootnote {1}\) \(\newcommand {\footnote }[2][\LWRfootnote ]{{}^{\mathrm {#1}}}\) \(\newcommand {\footnotemark }[1][\LWRfootnote ]{{}^{\mathrm {#1}}}\) \(\let \LWRorighspace \hspace \) \(\renewcommand {\hspace }{\ifstar \LWRorighspace \LWRorighspace }\) \(\newcommand {\mathnormal }[1]{{#1}}\) \(\newcommand \ensuremath [1]{#1}\) \(\newcommand {\LWRframebox }[2][]{\fbox {#2}} \newcommand {\framebox }[1][]{\LWRframebox } \) \(\newcommand {\setlength }[2]{}\) \(\newcommand {\addtolength }[2]{}\) \(\newcommand {\setcounter }[2]{}\) \(\newcommand {\addtocounter }[2]{}\) \(\newcommand {\arabic }[1]{}\) \(\newcommand {\number }[1]{}\) \(\newcommand {\noalign }[1]{\text {#1}\notag \\}\) \(\newcommand {\cline }[1]{}\) \(\newcommand {\directlua }[1]{\text {(directlua)}}\) \(\newcommand {\luatexdirectlua }[1]{\text {(directlua)}}\) \(\newcommand {\protect }{}\) \(\def \LWRabsorbnumber #1 {}\) \(\def \LWRabsorbquotenumber "#1 {}\) \(\newcommand {\LWRabsorboption }[1][]{}\) \(\newcommand {\LWRabsorbtwooptions }[1][]{\LWRabsorboption }\) \(\def \mathchar {\ifnextchar "\LWRabsorbquotenumber \LWRabsorbnumber }\) \(\def \mathcode #1={\mathchar }\) \(\let \delcode \mathcode \) \(\let \delimiter \mathchar \) \(\def \oe {\unicode {x0153}}\) \(\def \OE {\unicode {x0152}}\) \(\def \ae {\unicode {x00E6}}\) \(\def \AE {\unicode {x00C6}}\) \(\def \aa {\unicode {x00E5}}\) \(\def \AA {\unicode {x00C5}}\) \(\def \o {\unicode {x00F8}}\) \(\def \O {\unicode {x00D8}}\) \(\def \l {\unicode {x0142}}\) \(\def \L {\unicode {x0141}}\) \(\def \ss {\unicode {x00DF}}\) \(\def \SS {\unicode {x1E9E}}\) \(\def \dag {\unicode {x2020}}\) \(\def \ddag {\unicode {x2021}}\) \(\def \P {\unicode {x00B6}}\) \(\def \copyright {\unicode {x00A9}}\) \(\def \pounds {\unicode {x00A3}}\) \(\let \LWRref \ref \) \(\renewcommand {\ref }{\ifstar \LWRref \LWRref }\) \( \newcommand {\multicolumn }[3]{#3}\) \(\require {textcomp}\) \(\newcommand {\intertext }[1]{\text {#1}\notag 
\\}\) \(\let \Hat \hat \) \(\let \Check \check \) \(\let \Tilde \tilde \) \(\let \Acute \acute \) \(\let \Grave \grave \) \(\let \Dot \dot \) \(\let \Ddot \ddot \) \(\let \Breve \breve \) \(\let \Bar \bar \) \(\let \Vec \vec \) \(\DeclareMathOperator {\var }{var}\) \(\DeclareMathOperator {\cov }{cov}\) \(\def \ra {\Rightarrow }\) \(\def \to {\rightarrow }\) \(\def \iff {\Leftrightarrow }\) \(\def \sw {\subseteq }\) \(\def \wt {\widetilde }\) \(\def \mc {\mathcal }\) \(\def \mb {\mathbb }\) \(\def \sc {\setminus }\) \(\def \v {\textbf }\) \(\def \p {\partial }\) \(\def \E {\mb {E}}\) \(\def \P {\mb {P}}\) \(\def \R {\mb {R}}\) \(\def \C {\mb {C}}\) \(\def \N {\mb {N}}\) \(\def \Q {\mb {Q}}\) \(\def \Z {\mb {Z}}\) \(\def \B {\mb {B}}\) \(\def \~{\sim }\) \(\def \-{\,;\,}\) \(\def \|{\,|\,}\) \(\def \qed {$\blacksquare $}\) \(\def \1{\unicode {x1D7D9}}\) \(\def \cadlag {c\`{a}dl\`{a}g}\) \(\def \p {\partial }\) \(\def \l {\left }\) \(\def \r {\right }\) \(\def \F {\mc {F}}\) \(\def \G {\mc {G}}\) \(\def \H {\mc {H}}\) \(\def \Om {\Omega }\) \(\def \om {\omega }\)

9.5 In higher dimensions (off-syllabus)

We will look briefly at the behaviour of simple symmetric random walks in higher dimensions. This section is off-syllabus and is included for interest. In particular, we will consider versions of Lemma 9.3.1 in dimension \(d=2\) and \(d\geq 3\).

We must first explain what the simple symmetric random walk is in higher dimensions. It suffices to describe the case \(d=2\) to make the principle clear. For each \(n\in \N \), \(S_n\) is a random element of \(\Z ^2\), which we write as \(S_n=(S_{n}^{(1)},S_n^{(2)})\). The movements are given by \(S_n=\sum _{i=1}^n X_i\) where \(\P [X_i=(0,1)]=\P [X_i=(0,-1)]=\P [X_i=(1,0)]=\P [X_i=(-1,0)]=\frac 14\). These four possibilities correspond to movements by precisely one unit of space in the four possible directions: north, south, east and west.
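The definition above is straightforward to simulate. Here is a minimal Python sketch (our own illustration, not part of the notes) that generates a path of the two-dimensional walk:

```python
import random

def srw_2d_path(n, seed=None):
    """Simulate n steps of the simple symmetric random walk on Z^2.

    Each step X_i is one of the four unit moves (north, south, east, west),
    each with probability 1/4.  Returns the path S_0, S_1, ..., S_n.
    """
    rng = random.Random(seed)
    steps = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # N, S, E, W
    x, y = 0, 0
    path = [(0, 0)]
    for _ in range(n):
        dx, dy = rng.choice(steps)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path
```

Each point of the returned path differs from the previous one by exactly one unit step in one coordinate.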

Let \(R=\inf \{n\geq 1\-S_n=\v {0}\}\) denote the time taken to return to the origin \(\v {0}=(0,0)\). In two dimensions we have:

  • Lemma 9.5.1 If \(d=2\) then \(\P [R<\infty ]=1\).

Proof: We may use much the same argument as in Lemma 9.3.1. The only difference is that we must replace the estimate 9.9 with a two dimensional version, as follows. We have

\begin{align} \P [S_{2n}=(0,0)] &= \sum _{i=0}^n\binom {2n}{2i}\binom {2i}{i}\binom {2n-2i}{n-i} 4^{-2n} \notag \\ &= \binom {2n}{n}\sum _{k=0}^n\binom {n}{k}^2 4^{-2n} \notag \\ &= \l [\binom {2n}{n}2^{-2n}\r ]^2 \notag \\ &\sim \frac {1}{\pi n}. \label {eq:rw_return_time_d_2_eq_1} \end{align} In the above, the first line comes from counting the number of possible ways in which the two-dimensional walk can start at \(\v {0}\) and return to \(\v {0}\) after exactly \(2n\) steps (we did the one-dimensional version in Exercise 7.11). To do so the walk must make the same number of steps north as it does south, and the same number of steps east as it does west. We write \(2i\) for the number of east-west steps and \(2n-2i\) for the number of north-south steps. The first binomial factor counts which steps are east-west and which are north-south. The second and third, respectively, count which of the east-west steps are to the east, and which of the north-south steps are to the north – and that fixes everything else about the path.

The second line of the above calculation follows easily from the first, and you should write the factorials out in full to check this for yourself. The third line follows from the second by the combinatorial identity \(\binom {2n}{n}=\sum _{k=0}^n \binom {n}{k}^2\). The final line follows from the same calculation as in the right hand side of (9.9).
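As a quick numerical sanity check (our own sketch, not part of the notes), one can compare the counting sum in the first line of (9.12) with the closed form in the third line, and with the asymptotic \(1/(\pi n)\):

```python
from math import comb, pi

def p_return_2d(n):
    """P[S_{2n} = (0,0)] via the counting sum (first line of (9.12))."""
    total = sum(comb(2*n, 2*i) * comb(2*i, i) * comb(2*n - 2*i, n - i)
                for i in range(n + 1))
    return total / 4**(2*n)

def p_return_2d_closed(n):
    """P[S_{2n} = (0,0)] via the closed form [binom(2n,n) 2^{-2n}]^2."""
    return (comb(2*n, n) / 2**(2*n))**2
```

For moderate \(n\) the two expressions agree, and for large \(n\) (say \(n=200\)) both are within about one percent of `1/(pi*n)`.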

With (9.12) in place of (9.9), the rest of the argument for this lemma follows in the same way as that of Lemma 9.3.1. We’ll omit writing out the details again.   ∎

We won’t prove it here, but the simple symmetric random walk will take much longer to return to the origin in \(d=2\) than in \(d=1\). This is because, in order to reach \((0,0)\), both the east-west component \(S^{(1)}_n\) and the north-south component \(S^{(2)}_n\) must return simultaneously to zero, and they are unlikely to do this at the same time. Perhaps surprisingly, in two dimensions the random walk still manages to return. The picture changes if we move to three dimensions, with the obvious notation: now \(S_n\in \Z ^3\) and there are 6 different directions to move in, each with probability \(\frac 16\). In three dimensions there is now so much space to get lost in that the random walk might not return to the origin.

  • Lemma 9.5.2 If \(d=3\) then \(\P [R<\infty ]<1\).

Sketch of Proof: We will only give a sketch of the argument. We need to make some substantial modifications to the argument used in Lemmas 9.3.1 and 9.5.1, because we are now trying to prove the opposite result, but there are still parts that overlap. In similar style to (9.12) we have

\begin{align*} \P [S_{2n}=(0,0,0)] &= \sum _{i+j+k=n}\frac {(2n)!}{(i!)^2(j!)^2(k!)^2}6^{-2n} \\ &= \binom {2n}{n} 2^{-2n} \sum _{i+j+k=n} \l (\frac {n!}{i!j!k!}3^{-n}\r )^2 \\ &\leq \binom {2n}{n} 2^{-2n} \l (\max _{i+j+k=n} \frac {n!}{i!j!k!}3^{-n}\r )\l (\sum _{i+j+k=n} \frac {n!}{i!j!k!}3^{-n}\r ). \end{align*} The first line again concerns counting possible ways to return to the origin in exactly \(2n\) steps (we’ll omit the details of this) and the next two lines follow by elementary calculations. We have that \(\sum _{i+j+k=n} \frac {n!}{i!j!k!}3^{-n}=1\), because this sums up all the probabilities of outcomes \((i,j,k)\) that sum to \(n\) according to the trinomial distribution (which you might have to look up, if you haven’t seen it before). A bit of calculus with Lagrange multipliers shows that the term \(\frac {n!}{i!j!k!}3^{-n}\), subject to the condition \(i+j+k=n\), is maximised when \(i,j,k\) are all approximately equal to \(\frac {n}{3}\). Approximating \(\frac {n}{3}\) by the nearest integer, we can find that \(\max _{i+j+k=n} \frac {n!}{i!j!k!}3^{-n}\leq \frac {C}{n}\), where \(C\in (0,\infty )\) is a constant. Combining this with the calculation from (9.9) we obtain that \(\P [S_{2n}=\v {0}]\leq \frac {C'}{n^{3/2}}\), where \(C'\in (0,\infty )\) is also constant.
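The two facts used above – the trinomial probabilities summing to one, and the maximal term being of order \(\frac 1n\) – are easy to check numerically. A short Python sketch (our own illustration, using the notation of the display above):

```python
from math import factorial

def trinomial_terms(n):
    """All trinomial probabilities n!/(i! j! k!) 3^{-n} with i + j + k = n."""
    out = []
    for i in range(n + 1):
        for j in range(n - i + 1):
            k = n - i - j
            out.append(factorial(n)
                       / (factorial(i) * factorial(j) * factorial(k))
                       / 3**n)
    return out

def p_return_3d(n):
    """P[S_{2n} = (0,0,0)] via the first line of the display."""
    total = sum(factorial(2*n) // (factorial(i)**2 * factorial(j)**2
                                   * factorial(n - i - j)**2)
                for i in range(n + 1) for j in range(n - i + 1))
    return total / 6**(2*n)
```

One finds that `sum(trinomial_terms(n))` equals \(1\), that `n * max(trinomial_terms(n))` stays bounded as \(n\) grows (consistent with the bound \(\frac Cn\)), and that `p_return_3d(n) * n**1.5` stays bounded (consistent with the bound \(\frac {C'}{n^{3/2}}\)).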

Setting \(G=\sum _{n=1}^\infty \1\{S_n=\v {0}\}\), and noting that the walk can only be at \(\v {0}\) at even times, we now have

\[\E [G]=\sum _{n=1}^\infty \P [S_{2n}=\v {0}]\leq \sum _{n=1}^\infty \frac {C'}{n^{3/2}}<\infty .\]

Hence the expected number of visits to the origin is finite. On the other hand, by the strong Markov property, if \(\P [R<\infty ]=1\) then we would almost surely have infinitely many returns to the origin, which would give \(\E [G]=\infty \). Hence \(\P [R<\infty ]<1.\)   ∎
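A Monte Carlo experiment illustrates the conclusion. Truncating at a finite horizon (so the simulation slightly underestimates \(\E [G]\)), the average number of returns of the three-dimensional walk settles at a small finite value. The function and parameters below are our own choices for illustration:

```python
import random

def mean_returns_3d(steps=1000, trials=1000, seed=1):
    """Monte Carlo estimate of E[G], the expected number of returns to the
    origin of the 3d walk, truncated after `steps` steps per trial."""
    rng = random.Random(seed)
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    total = 0
    for _ in range(trials):
        x = y = z = 0
        for _ in range(steps):
            dx, dy, dz = rng.choice(moves)
            x, y, z = x + dx, y + dy, z + dz
            if x == y == z == 0:
                total += 1  # one more visit to the origin
    return total / trials
```

Running the analogous experiment in \(d=1\) or \(d=2\), the estimate keeps growing as the horizon `steps` increases, in line with \(\P [R<\infty ]=1\) in those dimensions; in \(d=3\) it stabilises.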

Lemma 9.5.2 also holds in all dimensions \(d\geq 3\). To see this, note that the first three coordinates of a \(d\)-dimensional random walk, observed at the times when one of those coordinates moves, give us a three-dimensional random walk. If the \(d\)-dimensional walk visits the origin then so do its first three coordinates, and Lemma 9.5.2 gives that this has probability less than one.

It is natural to ask what the random walk does do, in three dimensions and higher, as \(n\to \infty \). The answer is easy to guess: it disappears off to infinity and does not come back, i.e. \(|S_n|\stackrel {a.s.}{\to }\infty \). This is left for you to prove, as Exercise 9.9.