Stochastic Processes and Financial Mathematics
(part one)
Chapter 9 Simple random walks \(\msconly \)
In Section 7.4 we studied the long-term behaviour of the Polya urn and the Galton-Watson process, by taking a limit of a suitably chosen martingale as the time parameter tended to infinity. The following result shows that these techniques do not (or at least, not directly) tell us how random walks behave. In this chapter we will focus on the case of simple random walks, meaning walks that move precisely one unit of space, upwards or downwards, in each step of time.
Proof: The process \((S_n)\) is integer valued. From Lemma 7.4.1 we have that, if \((S_n)\) were to converge almost surely, it would have to eventually become constant. By definition we have \(|S_{n+1}-S_n|=1\), so this cannot happen, hence \((S_n)\) does not converge almost surely.
In the case \(p=\frac 12\) we have that \((S_n)\) is a martingale. If \((S_n)\) were also uniformly bounded in \(L^1\) then the martingale convergence theorem would imply almost sure convergence, so from what we have already proved \((S_n)\) is not uniformly bounded in \(L^1\). For the case \(p\neq \frac 12\), we have shown in (4.3) that \(M_n=S_n-n(p-q)\) is a martingale, hence \(\E [S_n]-n(p-q)=\E [M_n]=\E [M_0]=0\), which gives \(\E [S_n]=n(p-q)\) and therefore \(|\E [S_n]|\to \infty \). By the absolute values property of expectations \(|\E [S_n]|\leq \E [|S_n|]\), hence \(\E [|S_n|]\to \infty \), so \((S_n)\) is not uniformly bounded in \(L^1\). ∎
In fact, as we mentioned briefly (and without proof) in Section 7.4, random walks exhibit a mixture of oscillatory behaviour and divergence to \(\pm \infty \). We will study this behaviour here, as well as considering some other closely related properties of random walks. The techniques we will use are primarily those developed in Chapter 8.
This chapter will make use of some of the exercises from earlier chapters, including within the proofs. Make sure that you study these too, when they appear, if you have not done so already. As a reminder: the solutions to all of the end-of-chapter exercises can be found within the online version of the lecture notes.
9.1 Exit probabilities \(\msconly \)
We recall the asymmetric random walk from Section 4.1. Let \((X_i)_{i=1}^\infty \) be a sequence of i.i.d. random variables. Let \(p,q\in (0,1)\) with \(p+q=1\) and \(p\neq q\), and suppose that
\[\P [X_i=1]=p,\hspace {1pc}\P [X_i=-1]=q.\]
The asymmetric random walk is the stochastic process
\[S_n=\sum \limits _{i=1}^n X_i.\]
Set \(\mc {F}_n=\sigma (X_1,\ldots ,X_n)\) and note that \((\mc {F}_n)\) is the natural filtration of \((S_n)\). Recall that we showed in Section 4.1 that the stochastic process \(S_n-n(p-q)\) is a martingale. In Exercise 4.2 we found another martingale associated to \((S_n)\); in particular, we showed that
\[M_n = (q/p)^{S_n}\]
is a martingale.
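For instance, writing out the one-step calculation (essentially that of Exercise 4.2): since \(M_{n+1}=M_n(q/p)^{X_{n+1}}\), with \(M_n\) being \(\mc {F}_n\) measurable and \(X_{n+1}\) independent of \(\mc {F}_n\), we have
\[\E [M_{n+1}\,|\,\mc {F}_n]=M_n\,\E \l [(q/p)^{X_{n+1}}\r ]=M_n\l (p\,\frac {q}{p}+q\,\frac {p}{q}\r )=M_n(q+p)=M_n.\]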
Our plan is to use the optional stopping theorem, applied to the martingale \((M_n)\), to obtain information about the hitting times of the asymmetric random walk. Let \(T_a=\inf \{n: S_n=a\}\), define \(T_b\) analogously, and set \(T=T_a \wedge T_b\), where \(a<0<b\) are integers. We aim to calculate \(\P [T=T_a]\) and \(\P [T=T_b]\). We can show that \(T_a\) is a stopping time by noting that
\[\{T_a\leq n\}=\bigcup _{i=0}^n \{S_i\leq a\}.\]
Each event \(\{S_i\leq a\}\) with \(i\leq n\) belongs to \(\mc {F}_n\), so \(\{T_a\leq n\}\in \mc {F}_n\) and \(T_a\) is a stopping time. Similarly, \(T_b\) is a stopping time, and it follows from Lemma 8.2.3 that \(T\) is also a stopping time.
We now look to apply the optional stopping theorem, using the (b) conditions. For this, we’ll need the following lemma.
Lemma 9.1.1 In the notation above, \(\E [T]<\infty \).
Proof: Let \(m=b-a\). Divide up the random variables \((X_n)\) into sets \(A_1,A_2,\ldots \) as follows:
\(\seteqnumber{0}{9.}{0}\)\begin{equation} \overbrace {X_1,X_2,X_3,\ldots ,X_m}^{A_1},\,\overbrace {X_{m+1},X_{m+2},\ldots ,X_{2m}}^{A_2},\,\overbrace {X_{2m+1},X_{2m+2},\ldots ,X_{3m}}^{A_3},\,\overbrace {X_{3m+1},\ldots }^{\ldots }. \end{equation}
Each \(A_k\) contains precisely \(m\) of the \(X_i\)s.
Let \(E_k=\{\text {for all }X\in A_k,\,X=1\}\) be the event that all random variables in \(A_k\) are equal to one. Note that if the event \(E_k\) occurs then our random walk moves up by \(m\) steps, during time \((k-1)m,\ldots ,km\).
If \(T\) has not occurred before time \((k-1)m\) then \(a<S_{(k-1)m}<b\). If the event \(E_k\) then occurs, we will have \(S_{km}=S_{(k-1)m}+m>a+m=b\), so the walk must pass through \(b\) at some time at or before \(km\), and hence \(T\leq km\). This is best illustrated with a picture:
We can think of the sequence of events \(E_1,E_2,\ldots \) as one chance after another that our random walker has to exit \([a,b]\).
By independence, \(\P [E_k]=p^m\). Hence, the random variable \(K=\inf \{k\in \N \,:\,E_k\text { occurs}\}\) is a geometric random variable with success parameter \(p^m\). This means that \(K<\infty \) almost surely and that \(\E [K]=p^{-m}<\infty \). By definition of \(K\), the event \(E_K\) occurs, so \(T\leq Km\), and by monotonicity of \(\E \) we have \(\E [T]\leq m\E [K]<\infty \). ∎
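To get a feel for the size of this bound, with an illustrative choice of parameters: if \(p=\frac 23\), \(a=-2\) and \(b=2\), then \(m=4\) and the estimate from the proof reads
\[\E [T]\leq m\E [K]=m\,p^{-m}=4\l (\frac {3}{2}\r )^4=\frac {81}{4},\]
which is a crude bound, but finiteness is all that we will need.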
From above, we have that \(M\) is a martingale. Hence by Lemma 8.2.4, \(M^T\) is also a martingale. By definition of \(T\),
\(\seteqnumber{0}{9.}{1}\)\begin{align*} \text {if $q>p$ then }&(q/p)^a\leq M^T_n\leq (q/p)^b \\ \text {if $p>q$ then }&(q/p)^a\geq M^T_n\geq (q/p)^b \end{align*} for all \(n\), hence \(M^T\) is a bounded martingale. Lemma 9.1.1 implies that \(\P [T<\infty ]=1\), so we have that condition (b) of the optional stopping theorem holds for the martingale \(M^T\) and the stopping time \(T\). Therefore,
\[\E [M^T_T]=\E [M^T_0]\]
but \(M^T_T=M_{T\wedge T}=M_T\) and \(M^T_0=M_{0\wedge T}=M_0=1\). So we have
\[\E [M_T]=1.\]
Our next aim is to calculate the probabilities \(\P [T=T_a]\) and \(\P [T=T_b]\). That is, we want to know which of the two boundaries \(a\) and \(b\) we actually hit at time \(T\) (for example, \(\{T=T_a\}=\{S_T=a\}\) is the event that we hit \(a\) at time \(T\)).
Since \(\P [T<\infty ]=1\), we must hit one or other boundary, so we have that
\(\seteqnumber{0}{9.}{1}\)\begin{equation} \label {eq:asym_rw_stopped_lin_1} \P [T=T_a]+\P [T=T_b]=1. \end{equation}
By partitioning the expectation \(\E [M_T]\) according to whether \(T=T_a\) or \(T=T_b\), we have
\(\seteqnumber{0}{9.}{2}\)\begin{align} 1&=\E [M_T]\\ &= \E [M_T\1\{T=T_a\}]+\E [M_T\1\{T=T_b\}] \notag \\ &= \E \l [\l (\frac {q}{p}\r )^a\1\{T=T_a\}\r ]+\E \l [\l (\frac {q}{p}\r )^b\1\{T=T_b\}\r ] \notag \\ &= \P [T=T_a] \l (\frac {q}{p}\r )^a + \P [T=T_b] \l (\frac {q}{p}\r )^b. \label {eq:asym_rw_stopped_lin_2} \end{align} Solving the linear equations (9.2) and (9.4), recalling that \(p\neq q\), gives that
\(\seteqnumber{0}{9.}{4}\)\begin{equation} \label {eq:asym_rw_stopped_1} \P [T=T_a]=\frac {(q/p)^b-1}{(q/p)^b-(q/p)^a} \end{equation}
and therefore also
\(\seteqnumber{0}{9.}{5}\)\begin{equation} \label {eq:asym_rw_stopped_2} \P [T=T_b]=1-\P [T=T_a]=\frac {1-(q/p)^a}{(q/p)^b-(q/p)^a}. \end{equation}
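For example, with the illustrative choice \(p=\frac 23\), \(q=\frac 13\), \(a=-1\) and \(b=2\), we have \(q/p=\frac 12\), and (9.5) and (9.6) give
\[\P [T=T_a]=\frac {(1/2)^2-1}{(1/2)^2-(1/2)^{-1}}=\frac {-3/4}{-7/4}=\frac {3}{7},\hspace {1pc}\P [T=T_b]=\frac {1-(1/2)^{-1}}{(1/2)^2-(1/2)^{-1}}=\frac {-1}{-7/4}=\frac {4}{7}.\]
As we might expect for a walk with upwards drift, exiting at the upper boundary is the more likely outcome, even though \(b\) is further from the origin than \(a\).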
Note that these formulas do not make sense if \(p=q=\frac 12\). In that case our two linear equations above are in fact the same equation, so we cannot solve them to find \(\P [T=T_a]\) and \(\P [T=T_b]\). However, all is not lost: the case of the symmetric random walk can be handled in a similar style to the above, but we need a different martingale in place of \((M_n)\). This is left for you to do, with some hints along the way, in Exercise 9.2.