last updated: October 16, 2024

Stochastic Processes and Financial Mathematics
(part one)


8.2 The optional stopping theorem (MSc only)

Adaptedness, from Definition 3.3.2, captures the idea that we cannot see into the future. This leaves us with the question: as time passes, in what circumstances can we tell if an event has already occurred?

  • Definition 8.2.1 A map \(T:\Om \to \{0,1,2,\ldots ,\infty \}\) is called a \((\mc {F}_n)\) stopping time if, for all \(n\), the event \(\{T\leq n\}\) is \(\F _n\) measurable.

A stopping time is a random time with the property that, using only the information in \(\mc {F}_n\) that is accessible to us at time \(n\), we can decide for each \(n\) whether or not \(T\) has already happened. It is straightforward to check that \(T\) is a stopping time if and only if \(\{T=n\}\) is \(\mc {F}_n\) measurable for all \(n\). This is left for you as exercise 8.4.
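As a simple sanity check of the definition (an illustration, not needed later), any deterministic time is a stopping time: if \(T(\omega )=m\) for every \(\omega \), where \(m\geq 0\) is a fixed integer, then

\[\{T\leq n\}=\begin{cases}\Omega &\text {if }n\geq m,\\ \emptyset &\text {if }n<m,\end{cases}\]

and both \(\Omega \) and \(\emptyset \) are elements of \(\mc {F}_n\).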

  • Example 8.2.2 Let \(S_n=\sum _{i=1}^nX_i\) be the simple symmetric random walk, with \(\P [X_i=1]=\P [X_i=-1]=\frac {1}{2}\). Let \(\mc {F}_n=\sigma (X_1,\ldots ,X_n)\). Then, for any \(a\in \N \), the time

    \[T=\inf \{n\geq 0\-S_n=a\},\]

    which is the first time \(S_n\) takes the value \(a\), is a stopping time. It is commonly called the ‘hitting time’ of \(a\). To see that \(T\) is a stopping time we note that

    \begin{align*} \{T\leq n\} =\{\text {for some }i\leq n\text { we have }S_i=a\} =\bigcup _{i=0}^n\{S_i=a\} =\bigcup _{i=0}^nS_i^{-1}(a). \end{align*} Since \(S_i\) is \(\mc {F}_i\) measurable and \(\mc {F}_i\subseteq \mc {F}_n\) for all \(i\leq n\), the above equation shows that \(\{T\leq n\}\in \mc {F}_n\).
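The same argument extends beyond a single level \(a\) (a small generalisation, not part of Example 8.2.2, but stopping times of this type will appear again below): for any set \(A\subseteq \Z \), the first hitting time \(T_A=\inf \{n\geq 0\-S_n\in A\}\) satisfies

\[\{T_A\leq n\}=\bigcup _{i=0}^n\{S_i\in A\}\in \mc {F}_n,\]

so \(T_A\) is also a stopping time.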

You will have seen hitting times before – for most of you, in MAS2003 in the context of Markov chains. You might also recall that the random walk \(S_n\) can be represented as a Markov chain on \(\Z \). You will probably have looked at a method for calculating expected hitting times by solving systems of linear equations. This approach is feasible when the state space of the Markov chain is finite; however, \(\Z \) is infinite. In this section we look at an alternative method, based on martingales; a worked example is given after the proof of Theorem 8.2.5.

Recall the notation \(\wedge \) and \(\vee \) that we introduced at the start of this chapter. That is, \(\min (s,t)=s\wedge t\) and \(\max (s,t)=s\vee t\).

  • Lemma 8.2.3 Let \(S\) and \(T\) be stopping times with respect to the filtration \((\mc {F}_n)\). Then \(S\wedge T\) is also a \((\mc {F}_n)\) stopping time.

Proof: Note that

\[\{S\wedge T\leq n\}=\{S\leq n\}\cup \{T\leq n\}.\]

Since \(S\) and \(T\) are stopping times, both the sets on the right hand side of the above are events in the \(\sigma \)-field \(\mc {F}_n\). Hence, the event on the left hand side is also in \(\mc {F}_n\).   ∎
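The same argument, with the union replaced by an intersection, shows that the maximum \(S\vee T\) is also a stopping time (we will not need this below, but it is a useful check of the definition):

\[\{S\vee T\leq n\}=\{S\leq n\}\cap \{T\leq n\}\in \mc {F}_n.\]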

If \(T\) is a stopping time and \(M\) is a stochastic process, we define \(M^T\) to be the process

\[M^T_n=M_{n\wedge T}.\]

Here \(a\wedge b\) denotes the minimum of \(a\) and \(b\). To be precise, this means that \(M^T_n(\omega )=M_{n\wedge T(\omega )}(\omega )\) for all \(\omega \in \Omega \). In Example 8.2.2, \(S^T\) would be the random walk \(S\) which is stopped (i.e. never moves again) when (if!) it reaches state \(a\).
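To unpack the notation with a concrete (hypothetical) path: if \(\omega \) is such that \(T(\omega )=3\), then

\[M^T_0(\omega )=M_0(\omega ),\quad M^T_1(\omega )=M_1(\omega ),\quad M^T_2(\omega )=M_2(\omega ),\quad M^T_n(\omega )=M_3(\omega )\text { for all }n\geq 3.\]

That is, the process runs as normal up until time \(T\), and is frozen at the value \(M_T\) from then on.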

  • Lemma 8.2.4 Let \(M_n\) be a martingale (resp. submartingale, supermartingale) and let \(T\) be a stopping time. Then \(M^T\) is also a martingale (resp. submartingale, supermartingale).

Proof: Let \(C_n:=\1\{T\ge n\}\). Note that \(\{T\ge n\}=\Omega \sc \{T\le n-1\}\), so \(\{T\ge n\}\in \mc {F}_{n-1}\). By Lemma 2.4.2 \(C_n\in m\mc {F}_{n-1}\). That is, \((C_n)\) is a previsible process. Moreover,

\[ (C\circ M)_n = \sum _{k=1}^n \1_{k\le T} (M_k-M_{k-1}) = \sum _{k=1}^{n\wedge T} (M_k-M_{k-1}) = M_{T\wedge n}-M_{0}. \]

The last equality holds because the sum is telescoping (i.e. the middle terms all cancel each other out). Hence, by Theorem 7.1.1, if \(M\) is a martingale (resp. submartingale, supermartingale), then \(C\circ M\) is also a martingale (resp. submartingale, supermartingale). Since \(M^T_n=M_0+(C\circ M)_n\), the same is true of \(M^T\).   ∎

  • Theorem 8.2.5 (Doob’s Optional Stopping Theorem) Let \(M\) be a martingale (resp. submartingale, supermartingale) and let \(T\) be a stopping time. Then

    \[\E [M_T]=\E [M_0]\]

    (resp. \(\geq \), \(\leq \)) if any one of the following conditions hold:

    • (a) \(T\) is bounded.

    • (b) \(\P [T<\infty ]=1\) and there exists \(c\in \R \) such that \(|M_n|\leq c\) for all \(n\).

    • (c) \(\E [T]<\infty \) and there exists \(c\in \R \) such that \(|M_n-M_{n-1}|\leq c\) for all \(n\).

Proof: We’ll prove this for the supermartingale case. The submartingale case then follows by considering \(-M\), and the martingale case follows since martingales are both supermartingales and submartingales.

Note that

\begin{equation} \label {eq:opt_stop_pre} \E [M_{n\wedge T}-M_0]\le 0, \end{equation}

because \(M^T\) is a supermartingale, by Lemma 8.2.4. For (a), we take \(n=\sup _\omega T(\omega )\), which is finite because \(T\) is bounded; then \(n\wedge T=T\) and the conclusion follows.

For (b), we use the dominated convergence theorem to let \(n\to \infty \) in (8.1). As \(n\to \infty \), almost surely \(n\wedge T(\omega )\) is eventually equal to \(T(\omega )\) (because \(\P [T<\infty ]=1\)), so \(M_{n\wedge T}\to M_T\) almost surely. Since \(M\) is bounded, \(M_{n\wedge T}\) and \(M_T\) are also bounded, so \(\E [M_{n\wedge T}]\to \E [M_T]\). Taking limits in (8.1) gives \(\E [M_T-M_0]\leq 0\), which in turn implies that \(\E [M_T]\leq \E [M_0]\).

For (c), we will also use the dominated convergence theorem to let \(n\to \infty \) in (8.1), but now we need a different way to check its conditions. We observe that

\[ |M_{n\wedge T}-M_0| =\left |\sum _{k=1}^{n\wedge T} (M_k-M_{k-1}) \right | \le T\;\sup _{n\in \N } |M_n-M_{n-1}|. \]

Since \(\E [T]<\infty \) implies \(\P [T<\infty ]=1\), we again have \(M_{n\wedge T}\to M_T\) almost surely, and since \(\E [T(\sup _n |M_n-M_{n-1}|)]\leq c\E [T]<\infty \), we can use the dominated convergence theorem to let \(n\to \infty \); the result follows as in (b).   ∎
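To see the theorem in action (a sketch, in which one step is assumed rather than proved), take the simple symmetric random walk \(S_n\) from Example 8.2.2, fix integers \(a,b\geq 1\), and let

\[T=\inf \{n\geq 0\-S_n=a\text { or }S_n=-b\},\]

the first time the walk exits \((-b,a)\). The walk \(S\) is a martingale with \(|S_n-S_{n-1}|=1\), so condition (c) of Theorem 8.2.5 applies provided \(\E [T]<\infty \); we assume this here (it is true, but proving it requires a separate argument). The theorem then gives \(\E [S_T]=\E [S_0]=0\). Since \(S_T\) can only take the values \(a\) and \(-b\),

\[0=\E [S_T]=a\,\P [S_T=a]-b\left (1-\P [S_T=a]\right ),\qquad \text {so}\qquad \P [S_T=a]=\frac {b}{a+b}.\]

In other words, the probability that the walk hits \(a\) before \(-b\) falls out of the optional stopping theorem, with no systems of linear equations to solve.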

These three sets of conditions (a)-(c) for the optional stopping theorem are listed on the formula sheet, in Appendix B. Sometimes none of them apply! See Remark 9.3.5 and the lemma above it for a warning example.