last updated: October 16, 2024

Stochastic Processes and Financial Mathematics
(part two)


Chapter 13 Stochastic differential equations

The situation we have arrived at is that we know Ito integrals exist but, as yet, we are unable to evaluate them or do much calculation with them. We will address this issue in Section 13.1, but first, in order to make our calculations run smoothly, we need to introduce some new notation.

We now understand how to make sense of equations of the form

\begin{equation} \label {eq:example_sde} X_t=X_0+\int _0^t F_u\,du+\int _0^t G_u\,dB_u \end{equation}

where \(B_t\) is Brownian motion. Note that we allow cases in which the stochastic processes \(F_u,G_u\) depend on \(X_u\) (for example, we could have \(F_u=X_u^2\)), but that \(X_t\) is an unknown stochastic process. Equations of this type are known as stochastic differential equations, or SDEs for short, an unfortunate name given that no differentiation is involved! They are also sometimes known as stochastic integral equations, but for historical reasons the term SDE has become the most commonly used.

‘Solving’ the equation (13.1) essentially means finding \(X_t\) in terms of \(B_t\). If \(F_t\) and \(G_t\) depend only on \(t\) and \(B_t\), then (13.1) is just an explicit formula, which automatically tells us that there is a solution to (13.1). However, if \(F_t\) and/or \(G_t\) depend on \(X_t\) (e.g. \(F_t=2X_t\)) then (13.1) is not an explicit formula and there is no guarantee that a solution for \(X_t\) exists.
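Although numerical methods are not part of this course, it may help intuition to see how an equation of the form (13.1) can be approximated on a computer by stepping forward over small time increments. The following Python sketch is our own illustration (the function name simulate_sde, the Euler-type stepping scheme and the choice \(F_u=2X_u\), \(G_u=1\) are assumptions for the example, not something taken from these notes).

import numpy as np

def simulate_sde(F, G, x0, T=1.0, n=1000, seed=0):
    # Approximate X_t = X_0 + int_0^t F(X_u) du + int_0^t G(X_u) dB_u by
    # stepping forward over small time increments (an Euler-type scheme).
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.empty(n + 1)
    X[0] = x0
    for k in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))  # Brownian increment, distributed N(0, dt)
        X[k + 1] = X[k] + F(X[k]) * dt + G(X[k]) * dB
    return X

# Example: F_u = 2 * X_u (so F depends on the unknown process) and G_u = 1.
path = simulate_sde(F=lambda x: 2.0 * x, G=lambda x: 1.0, x0=1.0)

The point of the sketch is only that the right-hand side of (13.1) can be built up increment by increment, even when \(F\) and \(G\) depend on the unknown process \(X\).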

  • Remark 13.0.1 \(\offsyl \) The theory of existence and uniqueness of solutions to SDEs relies on analysis in more delicate ways than we have time to discuss in this course. We use the term ‘solution’ for what is usually referred to in the theory of SDEs as a ‘strong solution’.

    In general, SDEs, like their classical counterparts (ODEs), often do not have explicit solutions, and frequently have no solution at all. Happily, though, in all the cases we need to consider, we will be able to write down explicit solutions.

Writing \(\int \)s everywhere is cumbersome, so it is common to ‘drop the \(\int \)s’ and write (13.1) as

\begin{align} dX_t&=F_t\,dt+G_t\,dB_t\label {eq:example_stoch_diff} \end{align} This equation has exactly the same meaning as (13.1); it is just written in different notation (to be clear: we are not differentiating anything). The notation \(dX_t, dB_t\) used in (13.2) is known as the notation of stochastic differentials, and we’ll use it from now on.

When we convert from stochastic differential form (13.2) to integral form (13.1) we can choose which limits to put onto the integrals. In (13.1) we choose \([0,t]\), but if \(v\leq t\) then we can also choose \([v,t]\), giving

\[X_t=X_v+\int _v^t F_u\,du+\int _v^t G_u\,dB_u.\]

(Rigorously, we can do this because (13.1) holds with \(v\) in place of \(t\), which we can then subtract from (13.1) (with \(t\) as written) to obtain the integrals over \([v,t]\).)
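In more detail, subtracting the two forms of (13.1) gives

\begin{align*} X_t-X_v&=\left (X_0+\int _0^t F_u\,du+\int _0^t G_u\,dB_u\right )-\left (X_0+\int _0^v F_u\,du+\int _0^v G_u\,dB_u\right )\\ &=\int _v^t F_u\,du+\int _v^t G_u\,dB_u. \end{align*}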

We can rewrite our definition of an Ito process in our new notation.

  • Definition 13.0.2 A stochastic process \(X_t\) is an Ito process if it satisfies

    \[dX_t=F_t\,dt+G_t\,dB_t\]

    for some \(G\in \mathcal {H}^2\) and a continuous adapted stochastic process \(F\).
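For example (a simple illustration of Definition 13.0.2, not numbered in the notes): taking the constant processes \(F_t=\mu \) and \(G_t=\sigma \) shows that drifting Brownian motion \(X_t=X_0+\mu t+\sigma B_t\) is an Ito process, with stochastic differential

\[dX_t=\mu \,dt+\sigma \,dB_t.\]

In particular, Brownian motion itself is the case \(\mu =0\), \(\sigma =1\).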

We need one more piece of notation. Given an Ito process \(X_t\), as in Definition 13.0.2, and a stochastic process \(H_t\), we will often write

\begin{equation} \label {eq:example_stoch_diff_2} dZ_t=H_t\,dX_t \end{equation}

which (as a definition) we interpret to mean that

\begin{align*} dZ_t&=H_t(F_t\,dt+G_t\,dB_t)\\ &=H_tF_t\,dt+H_tG_t\,dB_t. \end{align*} In integral form this represents

\[Z_t=Z_0+\int _0^tH_uF_u\,du+\int _0^t H_uG_u\,dB_u.\]

Of course, it is much neater to write (13.3).
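As a quick sanity check (our own example, using the notation above): if \(X_t=B_t\), so that \(F_t=0\) and \(G_t=1\), then (13.3) reduces to

\[dZ_t=H_t\,dB_t, \qquad \text {i.e.}\qquad Z_t=Z_0+\int _0^t H_u\,dB_u,\]

which is consistent with the Ito integral that we have already constructed.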

  • Remark 13.0.3 \(\offsyl \) There is a limiting procedure that can extend the Ito integral, for a suitable class of stochastic processes \(Z\), to define \(\int _0^t H_u\,dZ_u\) directly: in similar style to (12.12) but with increments of \(Z_t\) in place of increments of \(B_t\). This approach relies on some difficult analysis, and we won’t discuss it in this course.

13.1 Ito’s formula

We have commented that, whilst we do know that Ito integrals exist, we are not yet able to do any serious calculations with them. In fact, the situation is similar to that of conditional expectation: direct calculation is usually difficult and, instead, we prefer to work with Ito integrals via a set of useful properties. If you think about it, this is also the situation in classical calculus – you rely on the chain and product rules, integration by parts, etc.

From 12.8, we already know that there are differences between Ito calculus and classical calculus. In fact, there is bad news: none of the usual rules1 of classical calculus hold in Ito calculus.

There is also good news: Ito calculus does have its own version of the chain rule, which is known as Ito’s formula. Perhaps surprisingly, this alone turns out to be enough for most purposes2.

As in Definition 12.4.1, let \(X\) be an Ito process satisfying

\begin{equation} \label {eq:ito_differential} dX_t=F_t\,dt+G_t\,dB_t \end{equation}

where \(G\in \mathcal {H}^2\) and \(F\) is a continuous adapted process.

1 Meaning: the chain rule, product rule, quotient rule, integration by parts, inverse function rule, substitution rule, implicit differentiation rule, \(\dots \)

2 Actually, Ito calculus has a product rule too, but we won’t need it.

  • Lemma 13.1.1 (Ito’s formula) Suppose that, for \(t\in \mathbb {R}\) and \(x\in \mathbb {R}\), \(f(t,x)\) is a deterministic function that is differentiable in \(t\) and twice differentiable in \(x\). Then \(Z_t=f(t,X_t)\) is an Ito process and

    \begin{equation} \label {eq:ito_formula} dZ_t=\left \{\frac {\partial f}{\partial t}(t,X_t) + F_t\frac {\partial f}{\partial x}(t,X_t) + \frac {1}{2}G_t^2\frac {\partial ^2 f}{\partial x^2}(t,X_t)\right \}\,dt+ G_t\frac {\partial f}{\partial x}(t,X_t)\,dB_t. \end{equation}

As in classical calculus, it is common to suppress the arguments \((t,X_t)\) of \(f\) and its derivatives. This results in simply

\[dZ_t=\left \{\frac {\partial f}{\partial t} + F_t\frac {\partial f}{\partial x} + \frac {1}{2}G_t^2\frac {\partial ^2 f}{\partial x^2}\right \}\,dt+ G_t\frac {\partial f}{\partial x}\,dB_t.\]

For example, take \(X_t=B_t\) (that is, \(F_t=0\) and \(G_t=1\) in (13.4)) and \(f(t,x)=x^2\). Then \(\frac {\partial f}{\partial t}=0\), \(\frac {\partial f}{\partial x}=2x\) and \(\frac {\partial ^2 f}{\partial x^2}=2\), so Ito’s formula gives \(d(B_t^2)=1\,dt+2B_t\,dB_t\). In integral form this reads \(B_t^2=B_0^2+t+2\int _0^t B_u\,dB_u\) and, since \(B_0=0\), rearranging gives

\begin{equation} \label {eq:intBtBt} \int _0^t B_u\,dB_u=\frac {B_t^2}{2}-\frac {t}{2}. \end{equation}

This shows that Ito calculus behaves very differently to classical calculus (of course, \(\int _0^t u\,du=\frac {t^2}{2}\)).
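It is also reassuring to check (13.6) numerically, by approximating the Ito integral with left-endpoint sums in the spirit of its construction. The following Python sketch is our own illustration and not part of the notes.

import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 100_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)    # Brownian increments over [0, T]
B = np.concatenate(([0.0], np.cumsum(dB)))   # Brownian path with B_0 = 0

# Left-endpoint sum  sum_k B_{t_k} (B_{t_{k+1}} - B_{t_k})  approximates  int_0^T B_u dB_u.
ito_sum = np.sum(B[:-1] * dB)
print(ito_sum, B[-1] ** 2 / 2 - T / 2)       # the two numbers should be close

Running this for a few different random seeds shows the left-endpoint sums clustering around \(\frac {B_T^2}{2}-\frac {T}{2}\) rather than \(\frac {B_T^2}{2}\).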

Sketch proof of Ito’s formula \(\offsyl \)

The proof of Ito’s formula is very technical, and even some advanced textbooks on stochastic calculus omit a full proof. We are now getting used to the principle that (in continuous time) proofs of most important results about stochastic processes make heavy use of analysis; in this case, Taylor’s theorem. We’ll attempt to give just an indication of where (13.5) comes from.

Fix an interval \([0,t]\) and take \(t_k\) such that

\[0=t_0<t_1<t_2<\ldots <t_n=t.\]

We plan eventually to take a limit as \(n\to \infty \), where the minimal distance between two neighbouring \(t_k\) goes to zero. Note that this is in a similar style to the limit used in the construction of Ito integrals. We’ll use the notation (just in this section)

\begin{align*} \Delta t&=t_{k+1}-t_k\\ \Delta B&=B_{t_{k+1}}-B_{t_k}\\ \Delta X&=X_{t_{k+1}}-X_{t_k}. \end{align*}

We begin by writing

\[f(t,X_t)-f(0,X_0)=\sum \limits _{k=0}^{n-1} \left ( f(t_{k+1},X_{t_{k+1}})-f(t_{k},X_{t_{k}}) \right ).\]

Then, we apply the two-dimensional version of Taylor’s theorem to \(f\) on the time interval \([t_k,t_{k+1}]\) to give us

\begin{align} f(t_{k+1},X_{t_{k+1}})-f(t_{k},X_{t_{k}}) &=\frac {\partial f}{\partial t}\Delta t+\frac {\partial f}{\partial x}\Delta X + \frac 12\frac {\partial ^2 f}{\partial x^2}(\Delta X)^2 + \frac {\partial ^2 f}{\partial x\,\partial t}\Delta X\Delta t + \frac 12\frac {\partial ^2 f}{\partial t^2}(\Delta t)^2 \notag \\ &\hspace {3pc} + \text {[higher order terms]} \label {eq:ito_taylor} \end{align} We suppress the argument \((t_k,X_{t_k})\) of all partial derivatives of \(f\). In the ‘higher order terms’ we have terms containing \((\Delta X)^3, \Delta t(\Delta X)^2\) and so on; the terms involving \(\frac {\partial ^2 f}{\partial x\,\partial t}\) and \(\frac {\partial ^2 f}{\partial t^2}\) will also turn out to be negligible, for the same reasons as \(J_2\) and \(J_3\) below. Using the SDE (13.4) we have

\begin{align*} \Delta X=X_{t_{k+1}}-X_{t_k}&=\int _{t_k}^{t_{k+1}} F_u\,du+ \int _{t_k}^{t_{k+1}} G_u\,dB_u\\ &\approx F_{t_k}\Delta t + G_{t_k}\Delta B. \end{align*} Summing (13.7) over \(\sum _k:=\sum _{k=0}^{n-1}\) and using this approximation, we have

\[f(t,X_t)-f(0,X_0)=I_1+I_2+I_3+J_1+J_2+J_3+\text {[higher order terms]}\]

where

\[ \begin {alignedat}{2} I_1&=\sum \limits _k \frac {\partial f}{\partial t}\Delta t && \to \int _0^t \frac {\partial f}{\partial t}\,du\\ I_2&=\sum \limits _k \frac {\partial f}{\partial x}F_{t_k}\Delta t && \to \int _0^t \frac {\partial f}{\partial x} F_u\,du\\ I_3&=\sum \limits _k \frac {\partial f}{\partial x}G_{t_k}\Delta B && \to \int _0^t \frac {\partial f}{\partial x} G_u\,dB_u\\ J_1&=\frac 12 \sum \limits _k \frac {\partial ^2 f}{\partial x^2}G_{t_k}^2(\Delta B)^2 \hspace {5pc} && \to \frac 12 \int _0^t \frac {\partial ^2 f}{\partial x^2}G_{u}^2\,du\\ J_2&=\sum \limits _k \frac {\partial ^2 f}{\partial x^2}F_{t_k}G_{t_k}(\Delta B)(\Delta t) && \to 0 \\ J_3&=\frac 12 \sum \limits _k \frac {\partial ^2 f}{\partial x^2}F_{t_k}^2(\Delta t)^2 && \to 0 \end {alignedat} \]

As we let \(n\to \infty \), and the \(t_k\) become closer together, \(\Delta t \to 0\) and the convergence shown takes place. In the case of \(I_1\) and \(I_2\) this is essentially by definition of the (classical) integral. For \(I_3\), it is by the definition of the Ito integral, as in (12.3). For \(J_1\), \(J_2\) and \(J_3\) the picture is more complicated; convergence in this case follows by an extension of exercise 11.7. Essentially, exercise 11.7 tells us that terms of order \(\Delta t\) matter and that \((\Delta B)^2\approx \Delta t\), resulting in

\[J_1\approx \frac 12 \sum \limits _k \frac {\partial ^2 f}{\partial x^2}G_{t_k}^2\Delta t \hspace {3pc}\to \frac 12 \int _0^t \frac {\partial ^2 f}{\partial x^2}G_{u}^2\,du.\]

However, \((\Delta t)^2\) and \((\Delta t)(\Delta B)\approx (\Delta t)^{3/2}\) are both much smaller than \(\Delta t\), with the result that the terms \(J_2\) and \(J_3\) vanish as \(\Delta t\to 0\). The higher order terms in (13.7) also vanish. Providing rigorous arguments to take all these limits is the bulk of the work involved in a full proof of Ito’s formula.

After the limit has been taken, Ito’s formula is obtained by collecting the non-zero terms \(I_1,I_2,I_3,J_1\) together and writing the result in the notation of stochastic differentials.
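The key quantitative point in the sketch, that \(\sum _k(\Delta B)^2\approx t\) while \(\sum _k(\Delta B)(\Delta t)\) and \(\sum _k(\Delta t)^2\) become negligible, can also be seen numerically. The short Python illustration below is our own and not part of the notes.

import numpy as np

rng = np.random.default_rng(2)
t, n = 1.0, 100_000
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)    # the increments Delta B over a partition of [0, t]

print(np.sum(dB ** 2))   # close to t = 1: the (Delta B)^2 terms contribute, so J_1 survives
print(np.sum(dB * dt))   # close to 0: the (Delta B)(Delta t) terms vanish, as for J_2
print(n * dt ** 2)       # equals t * dt, which tends to 0, as for J_3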