last updated: October 16, 2024

Stochastic Processes and Financial Mathematics
(part two)


14.2 The Markov property

The notation \(\E _{t,x}[\ldots ]\) from Section 14.1, along with conditional expectation, allows us to formally express one of the most useful concepts in probability theory: the Markov property.

Those of you taking the MAS61023 version of the course have already studied a more extensive treatment of the Markov property, in discrete time, in Section 8.4. We’ll use slightly different notation here.

The idea of the Markov property is the following. Suppose that we have a stochastic process \((X_t)\), and that we have waited up until time \(t\), so the information visible to us is given by \(\mc {F}_t\). We want to make a best guess at some information about \(X_T\), where \(T>t\). In symbols, this means that we have chosen some (deterministic) function \(\Phi \) and we are interested in

\[\E [\Phi (X_T)\|\mc {F}_t].\]

In principle, we have access to all the information in \(\mc {F}_t\). However, in many cases it holds that\(^{1}\)

\begin{equation} \label {eq:markov_property} \E [\Phi (X_T)\|\mc {F}_t]=\E _{t,X_t}[\Phi (X_T)]. \end{equation}

Recall our intuition for conditional expectation: we view the left hand side of (14.7) as our best guess for \(\Phi (X_T)\), based on the information we have seen during \([0,t]\). On the right hand side, we simply start at time \(t\), fix the value of \(X_t\), run the stochastic process until time \(T\), and take expectations. Thus, the right hand side relies on much less information (in particular, we ignore the values of \(X_u\) for \(u\in [0,t)\) and only need to know \(X_t\)).
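For example, take \(X\) to be a Brownian motion \(B\) and take \(\Phi (x)=x^2\). Writing \(B_T=B_t+(B_T-B_t)\), where the increment \(B_T-B_t\) is independent of \(\mc {F}_t\) with distribution \(N(0,T-t)\), we obtain

\[\E [B_T^2\|\mc {F}_t]=B_t^2+2B_t\,\E [B_T-B_t]+\E \l [(B_T-B_t)^2\r ]=B_t^2+(T-t).\]

This depends on \(\mc {F}_t\) only through the current value \(B_t\), exactly as (14.7) predicts: restarting a Brownian motion from the point \(x\) at time \(t\) gives \(\E _{t,x}[B_T^2]=x^2+(T-t)\).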

Equation (14.7) is known as the Markov property for the stochastic process \(X\). Not all stochastic processes are Markov, but many of the most useful ones are. Intuitively, the future (random) behaviour of a Markov process depends on its current value but, crucially, not on the whole history of the process.

For our purposes we need only know that:

1 Formally, viewing random variables as functions on \(\Omega \), this equation reads \(\E [\Phi (X_T)\|\mc {F}_t](\omega )=\E _{t,X_t(\omega )}[\Phi (X_T)]\).

  • Lemma 14.2.1 All Itô processes satisfy the Markov property.

In particular, the formula (14.7) holds when \(X\) is Brownian motion, and when \(X\) is geometric Brownian motion.
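As an illustration of the Brownian case, here is a minimal Monte Carlo sketch in Python (the values of \(t\), \(T\), \(x\) and the choice \(\Phi (x)=x^2\) are arbitrary choices for this example). It estimates the right hand side of (14.7) by restarting the process from \((t,x)\), and compares the estimate with the exact value \(x^2+(T-t)\) computed above.

```python
import numpy as np

rng = np.random.default_rng(0)
t, T, x = 1.0, 2.0, 0.5   # current time, horizon, observed value X_t = x
Phi = lambda y: y**2      # test function

# Restart Brownian motion at (t, x): B_T = x + (B_T - B_t),
# where the increment B_T - B_t is N(0, T - t), independent of F_t.
n = 10**6
samples = x + rng.normal(0.0, np.sqrt(T - t), size=n)
estimate = Phi(samples).mean()

print(estimate)           # Monte Carlo estimate of E_{t,x}[Phi(B_T)]
print(x**2 + (T - t))     # exact value, here 1.25
```

With \(10^6\) samples the two printed values typically agree to around two decimal places.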

  • Remark 14.2.2 (MAS61023 only, off syllabus) In Section 8.4 we introduced the strong Markov property for random walks. All Itô processes are strongly Markov, so (14.7) also holds at stopping times \(T\); a rough numerical illustration is sketched below.
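The following sketch (a discretised approximation, with arbitrary parameter choices) illustrates the strong Markov property for Brownian motion: we run each path until the first time \(\tau \) it hits a level \(a\), then check that the increment \(B_{\tau +s}-B_\tau \), over the paths that hit, is approximately \(N(0,s)\), i.e. that the process restarted at \(\tau \) behaves like a fresh Brownian motion.

```python
import numpy as np

rng = np.random.default_rng(1)
a, dt, s = 1.0, 0.01, 0.5           # level to hit, time step, time run after hitting
n_paths, n_steps = 10_000, 800      # horizon is n_steps * dt = 8 time units

# Discretised Brownian paths: cumulative sums of N(0, dt) increments.
steps = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(steps, axis=1)

# First (discretised) time each path reaches level a.
hit = (paths >= a).argmax(axis=1)
reached = paths[np.arange(n_paths), hit] >= a   # discard paths that never hit a

k = int(s / dt)                     # number of steps corresponding to time s
valid = reached & (hit + k < n_steps)
idx = np.arange(n_paths)[valid]
post = paths[idx, hit[valid] + k] - paths[idx, hit[valid]]

# Strong Markov property: B_{tau+s} - B_tau should be ~ N(0, s),
# so the printed mean and variance should be close to 0 and s,
# up to Monte Carlo and discretisation error.
print(post.mean(), post.var(), s)
```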