8.4 The strong Markov property (MSc only)
Let us briefly consider a general filtered space \((\Omega ,\mc {F},(\mc {F}_n),\P )\), equipped with an adapted stochastic process \((X_n)\). The Markov property states that, for any \(m\in \N \), the following two distributions are equal:
1. the distribution of \((X_m,X_{m+1},X_{m+2},\ldots )\) given the information \(\sigma (X_m)\);
2. the distribution of \((X_m,X_{m+1},X_{m+2},\ldots )\) given the information \(\mc {F}_m\).
The key point is this: suppose that we are interested in the distribution of \((X_m,X_{m+1},X_{m+2},\ldots )\). This might depend on the value of \(X_m\), but if we know the value of \(X_m\) then the values of \(X_1,\ldots ,X_{m-1}\) (alongside any other information in \(\mc {F}_m\)) provide us with no extra information. Heuristically: the future of the process might depend on its current value, but if we know that current value then we can ignore the past.
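To make this concrete, here is a small illustrative example (not part of the general setup above). Take \((X_n)\) to be the simple random walk that steps up by \(1\) with probability \(p\) and down by \(1\) with probability \(1-p\), with \((\mc {F}_n)\) its generated filtration. Then
\[\P [X_{m+1}=x+1\mid \mc {F}_m]=p\quad \text {on the event }\{X_m=x\},\]
so conditioning on \(\mc {F}_m\) gives exactly the same answer as conditioning on \(\sigma (X_m)\): the path taken to reach \(x\) is irrelevant.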
The Markov property is very natural. We might even argue that reality itself, with its generated \(\sigma \)-field, has the Markov property, but this is a philosophical question best discussed elsewhere. Most stochastic processes that are used in modelling are Markov processes. It is also natural to replace the time \(m\) by a stopping time \(T\). The strong Markov property states that \((X_T,X_{T+1},X_{T+2},\ldots )\) has the same distribution given the information \(\mc {F}_T\) as it would do if all we knew was \(\sigma (X_T)\).
We will need the strong Markov property for random walks within Chapter 9. However, for random walks we can say something even stronger: after a stopping time \(T\), a random walk essentially restarts from its current location, and from then on behaves just like the same random walk, with its future movements independent of its past.
We need some notation to state this precisely. Let \(S_n=\sum _{i=1}^n X_i\) (so \(S_0=0\)), where the \((X_i)\) are independent, identically distributed random variables; this covers all the examples of random walks that we will study in Chapter 9.
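For example, the first hitting time \(T=\inf \{n\geq 0:S_n=a\}\) of a fixed level \(a\) is a stopping time, since \(\{T\leq n\}=\bigcup _{k=0}^{n}\{S_k=a\}\in \mc {F}_n\); stopping times of this type are natural candidates for the theorem below.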
Theorem 8.4.1 Let \(T\) be a stopping time. The following two distributions are equal:
1. the distribution of \((S_T,S_{T+1},S_{T+2},\ldots )\) given the information \(\sigma (S_T)\);
2. the distribution of \((S_T,S_{T+1},S_{T+2},\ldots )\) given the information \(\mc {F}_T\).
Moreover, in either case, the distribution of \((S_T,S_{T+1},S_{T+2},\ldots )\) is that of a random walk begun at \(S_T\) and incremented by an independent copy of \(X_i\) at each time step.
More formally, Theorem 8.4.1 states that for all \(n\in \N \) and any function \(f:\R \to \R \) we have
\[\E [f(S_{T+n})\mid \mc {F}_T]=\E [f(S_{T+n})\mid \sigma (S_T)]=\E [f(S_T+S'_n)]\]
where \((S'_n)\) is an independent copy of \((S_n)\). In the final expectation, \(S_T\) is treated as a constant: the right-hand side means \(g(S_T)\), where \(g(s)=\E [f(s+S'_n)]\). Strictly, this equation only holds provided that \(f(S_{T+n})\in L^1\) in each case, but we will always take \(f\) to be a nice enough function that this is not a problem.
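The restart property is easy to check by simulation. The following minimal Python sketch (our illustration, not part of the notes; the bias \(p=0.6\), the hitting level, and the function names are arbitrary choices) takes \(T\) to be the first hitting time of level \(3\) for a biased \(\pm 1\) walk, and compares \(S_{T+n}-S_T\) with an independent copy \(S'_n\); the two empirical distributions should agree.

```python
import random

P_UP = 0.6   # illustrative bias: positive drift makes the hitting time T a.s. finite
LEVEL = 3    # T = first hitting time of this level
N = 5        # number of steps taken after time T

def step():
    # one increment X_i of the walk: +1 with probability P_UP, else -1
    return 1 if random.random() < P_UP else -1

def increment_after_T():
    # run the walk to T = inf{n : S_n = LEVEL}, then return S_{T+N} - S_T
    s = 0
    while s != LEVEL:
        s += step()
    s_T = s
    for _ in range(N):
        s += step()
    return s - s_T

def fresh_walk():
    # an independent copy S'_N, started from 0
    return sum(step() for _ in range(N))

random.seed(2024)
trials = 100_000
post = [increment_after_T() for _ in range(trials)]
fresh = [fresh_walk() for _ in range(trials)]
# By Theorem 8.4.1 both empirical means should be close to N * (2 * P_UP - 1) = 1.0
print(sum(post) / trials, sum(fresh) / trials)
```

Comparing full histograms rather than just means gives a sharper check, but the means already illustrate the point. The positive drift is only there to keep the simulation fast: for \(p=1/2\) the hitting time is still almost surely finite, but it has infinite mean, so runs can be slow.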
It is often more useful to apply the strong Markov property in words, and this is common practice in probability theory. We’ll do it that way in Chapter 9. We won’t include a proof of Theorem 8.4.1 in this course. It isn’t hard to prove, but it is a bit tedious, and hopefully you can believe the result without needing a proof.