Stochastic Processes and Financial Mathematics
(part one)
Chapter A Solutions to exercises (part one)
Chapter 1
-
1.1 At time
we would have in cash and units of stock. This gives the value of our assets at time as . -
-
(a) Assuming we don’t buy or sell anything at time
, the value of our portfolio at time will be , where is as in the solution to 1.1. Since we need to be certain of paying off our debt, we should assume a worst case scenario for , that is . So, we are certain to pay off our debt if and only if -
(b) Since certainty requires that we cover the worst case scenario of our stock performance, and
, our best strategy is to exchange all our assets for cash at time . This gives us in cash at time . At time we would then have in cash and could pay our debt providing that
-
-
-
(a) We borrow
cash at time and use it to buy one stock. Then wait until time , at which point we will have a stock worth , where is as in 1.1. We sell our stock, which gives us in cash. Since , we repay our debt (plus interest), which costs us . We then have in cash. This is an arbitrage.
-
(b) We perform the same strategy as in (a), but with the roles of cash and stock swapped: We borrow
stock at time and then sell it for in cash. Then wait until time , at which point we will have in cash. Since the price of a stock is now , where is as in 1.1, and , we now buy one stock costing and repay the stockbroker, leaving us with in cash. This is an arbitrage.
-
-
1.4 We can calculate, using integration by parts, that
Similarly, we can calculate that . This gives that
-
1.5 We can calculate
as
. Also, as
. -
1.6 We have
We want to turn the above integral into the distribution function of the standard normal. To do so, substitute , and we obtain Hence, has the distribution function of , which means . -
1.7 For
,and we obtain a finite answer only if
. When , we haveso we conclude that
is finite if and only if . -
1.8 As
,-
•
because . -
•
oscillates through and does not converge. -
•
converges to zero by the sandwich rule since . -
•
is a geometric series, and . -
•
tends to infinity, because each term contained in a
is greater than or equal to .
-
-
1.9 Recall that
. Define and then . Then for all
we have . Hence,
Chapter 2
-
2.1 The result of rolling a pair of dice can be written
where . So a suitable would be . Of course, other choices are possible too. Since our choice of
is finite, a suitable -field is . -
-
(a) Consider the case of
. We need to check the three properties in the definition of a -field.-
1. We have
so the first property holds automatically. -
2. For the second we check complements:
, , and ; in all cases we obtain an element of . -
3. For the third, we check unions. We have
. Including into a union doesn’t change anything; including into a union results in . This covers all possible cases, in each case resulting in an element of .
So,
is a -field. Since is just with the roles of and swapped, by symmetry is also a -field. -
-
(b) We have
and , but and . Hence is not closed under unions; it fails property 3 of the definition of a -field. However,
is the intersection of two -fields, so is automatically itself a -field. (Alternatively, note that , and check the definition.)
-
-
-
(a) Let
be the event that the sequence contains precisely heads. Let be the event that we see precisely heads during the first tosses and, from then on, only tails. Then, If occurs then at least one of occurs. Hence,.
-
(b) Let
be the event that the sequence contains finitely many heads. Then, if occurs, precisely one of occurs. Hence, . That is, the probability of having only finitely many heads is zero. Hence, almost surely, the sequence
contains infinitely many heads.By symmetry (exchange the roles of heads and tails) almost surely the sequence
also contains infinitely many tails.
-
-
-
(a)
contains the information of whether a was rolled. -
(b)
gives the information of which value was rolled if a , or was rolled, but if the roll was in then cannot tell the difference between these values. -
(c)
contains the information of which of the sets , and contains the value rolled, but gives no more information than that.
-
-
-
(a) The values taken by
areHence, the pre-images are
, and . These are all elements of , so . However, they are not all elements of (in fact, none of them are!) so . -
(b) Define
Then the pre-images are
and . Hence .It isn’t possible to use unions/intersections/complements to make the set
out of (one way to see this is note that as soon as we tried to include we would also have to include , meaning that we can’t make ). Hence .
-
-
2.6 We have
, and . This gives . To work this out, you could note that
must be in for , then also include and , then keep adding unions and complements of the events until you find that you have added them all. (Of course, this only works because is finite!) -
2.7 Since sums, divisions and products of random variables are random variables,
is a random variable. Since is non-zero, we have that is a random variable, and using products again, is a random variable.For
, recall that for any we haveSince
is a random variable so is . Since limits of random variables are also random variables, so is -
2.8 Since
we have for all , so
, but so
. -
2.9 Let
and . Our plan is to generalize the argument that we used to prove (2.7). First, we condition to see that
Hence,
Here we use that, if then , and if then since we have . So . -
2.10 First, we condition to see that
From (A.1) since
we have This second inequality follows by looking at two cases: if then both sides are zero, but if then we can use that . Using monotonicity of , we then haveDividing through by
finishes the proof. This result is known as Markov's inequality.
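Markov's inequality is easy to check numerically. The sketch below samples from an exponential distribution (an illustrative choice, not taken from the exercise) and compares the empirical tail probability with the bound E[X]/a:

```python
import random

# Empirical check of Markov's inequality P(X >= a) <= E[X]/a for a
# non-negative random variable X; here X ~ Exponential(1), chosen
# purely for illustration.
random.seed(0)
samples = [random.expovariate(1.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)

for a in (1.0, 2.0, 4.0):
    tail = sum(x >= a for x in samples) / len(samples)
    bound = mean / a
    print(f"a={a}: P(X>=a) ~ {tail:.4f} <= E[X]/a ~ {bound:.4f}")
```

For Exponential(1) the true tail is e^{-a}, comfortably below the bound; Markov's bound is crude but fully general.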
-
2.11 Since
, Markov’s inequality (from 2.10) gives us thatfor any
. Hence,
Chapter 3
-
3.1 We have
so Here, we use the linearity of conditional expectation, followed by the fact that is measurable and is independent of with .Secondly,
so Here, we again use linearity of conditional expectation to deduce the first line. To deduce the second line, we use that and are measurable (using the taking out what is known rule for the middle term), whereas is independent of . The final line comes from (which we already knew from above) and that . -
3.2 For
, we have , so is also measurable. For each we have so , hence . Lastly, Here, we use linearity of conditional expectation in the first line. To deduce the second, we use that for and that is independent of in the second. For the final line we note that . -
-
(a) Since
for all , we have . Since for all , we have , so is bounded and hence for all . Lastly, Here, to deduce the second line we use that for . To deduce the third line we use that is independent of and then to deduce the fourth line we use that . -
(b) By definition of conditional expectation (Theorem 3.1.1), we have
and for all . It remains only to check that Here, to get from the first to the second line we use the tower property.
-
-
3.4 Note that the first and second conditions in Definition 3.3.3 are the same for super/sub-martingales as for martingales. Hence,
satisfies these conditions. For the third condition, we have that (because is a supermartingale) and (because is a submartingale). Hence , so is a martingale. -
-
(a) If
then because is measurable. For , we haveHere we use the tower property to deduce the first equality (note that
so ) and the fact that is a martingale to deduce the second inequality (i.e. ). Iterating, from we obtain thatand the martingale property then gives that this is equal to
, as required. -
(b) If
is a submartingale then , whereas if is a supermartingale then .
-
-
3.6 We have
so as required. Here, we use the taking out what is known rule (since ) and the martingale property of . It follows that
. We have and since we have . Hence is a submartingale. -
3.7 Using that
we can calculateWe want this to be equal to
, and . So we needHence,
is our choice. It is then easy to check thatand thus
is a martingale, as required. -
3.8 Note that
is constant, by symmetry (i.e. permuting the
does not change the distribution of ). Hence, if we set then Here we use the linearity of conditional expectation and the fact that is measurable. Hence,
Chapter 4
-
4.1 Since
and , also . Since , we have , so is bounded and hence . Hence also and . Lastly,
Here, we use the taking out what is known rule and the fact that (and hence also ) is independent of . We also calculate the expectation of the discrete random variable , using that . This gives us that where to deduce the last line we use that . Thus is a martingale and is a submartingale. -
4.2 Since
for all , we have . We have so and (note ), which implies that and for all . We have
Here we use the taking out what is known rule, followed by the fact that is independent of and the relationship between conditional expectation and independence. To deduce the final step we use that . Therefore, we have , which means that is a submartingale. We now look at
. We have Here we use the taking out what is known rule, followed by the fact that is independent of and the relationship between conditional expectation and independence. To deduce the final step we use that -
4.3 Since
for all we have where . Further, if we set then so .We have
where . Therefore, is a martingale if and only if . -
4.4 Clearly
(recall that, by default, we take to be the generated filtration of ) hence and . We have so by the triangle law and monotonicity of expectation, hence . Similarly ,We have
Here, the second line follows by linearity. The third line follows by the taking out what is known rule and the independence rule, using that is independent of , whilst . The fourth line follows because and . Hence is a martingale.Subtracting
from both sides, we obtain . Hence is a submartingale. -
4.5 The natural filtration is
. We can writeHence, if
then . Since , a trivial induction shows that for all . Hence, and also . We have
(because the walk is reflected at zero and can increase by at most in each time step) so is bounded and hence .Lastly,
Here we use linearity and the taking out what is known rule, as well as that is independent of . Hence, Here we use (A.2) to deduce the second line, as well as taking out what is known. Hence, is a martingale. -
4.6 Let
be the number of red balls in the urn at time (i.e. after the draw is complete). Then, the proportion of red balls in the urn just after the step is
is adapted to the natural filtration and also in . We have Hence is a martingale. -
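The martingale property of the proportion process can also be seen in simulation. This sketch uses the two-colour urn of Section 4.2 (draw a ball uniformly at random, return it together with one extra ball of the same colour); the multi-colour urn of this exercise behaves analogously, colour by colour:

```python
import random

def polya_proportion(n_steps, red=1, black=1, rng=random):
    # Two-colour Polya urn: draw a ball uniformly at random and return
    # it together with one extra ball of the same colour.
    for _ in range(n_steps):
        if rng.random() < red / (red + black):
            red += 1
        else:
            black += 1
    return red / (red + black)

random.seed(1)
# The proportion of red balls is a martingale, so its expectation stays
# at the initial proportion (here 1/2) for every n.
est = sum(polya_proportion(50) for _ in range(20_000)) / 20_000
print(f"mean proportion after 50 steps ~ {est:.3f} (should be near 0.5)")
```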
4.7 By the result of 3.6,
is a submartingale.-
(i)
is measurable (because ) and since we have , so . Further, using the submartingale property of ,so
is a submartingale. But , so (by Lemma 3.3.6) we have that is not a martingale. -
(ii) We have shown in Section 4.1 that
is a martingale, and in Exercise 4.4 that is a martingale. Hence and by the triangle law, so also . Also, as required. To deduce the second line we use the martingale properties of and . As all martingales are submartingales, this case is also a submartingale.More generally, the sum of two martingales is always a martingale. The same is true for submartingales and also for supermartingales.
-
(iii)
is measurable (because ) and since we have , so . Since is a martingale, we haveHence
is not a martingale. Also, since may be both positive or negative, we do not have , so is also not a submartingale.
-
-
4.8 We showed that
was a martingale in 4.7. We try to mimic that calculation and find out what goes wrong in the cubic case. So, we look at Here we use linearity, taking out what is known and the fact that (from 4.7). However, also , so because is independent of and (it takes values only or ) so . Putting these two calculations together, we have . Suppose (aiming for a contradiction) that there is a deterministic function
such that is a martingale. ThenCombining the above equation with (A.3) gives
but this is impossible because
is not deterministic. Thus we reach a contradiction; no such function exists.
Chapter 5
-
5.1 The value of the portfolio
at time will bewhere
is a random variable with and . -
-
(a) We need a portfolio
such that , so we need that Solving this pair of linear equations gives and . So our replicating portfolio consists simply of cash and no stock. Hence, the value of this contingent claim at time
is . -
(b) Now we need that
which has solution and .Hence, the value of this contingent claim at time
is .
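Both parts follow the same recipe: solve two linear equations so that the portfolio matches the claim in the up and down scenarios. A sketch of that computation, with illustrative parameter values rather than those of the exercise:

```python
def replicate(s, u, d, r, claim_up, claim_down):
    """Find the portfolio of x units of cash and y units of stock with
       x*(1+r) + y*s*u = claim_up,  x*(1+r) + y*s*d = claim_down
    in the one-period binomial model."""
    y = (claim_up - claim_down) / (s * (u - d))
    x = (claim_up - y * s * u) / (1 + r)
    return x, y

# Illustrative example: s=100, u=1.2, d=0.8, r=0, European call, strike 100.
x, y = replicate(100, 1.2, 0.8, 0.0, max(120 - 100, 0), max(80 - 100, 0))
value = x + 100 * y  # value of the replicating portfolio at time 0
print(f"cash x = {x:.2f}, stock y = {y:.2f}, claim value = {value:.2f}")
```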
-
-
-
(a) If we buy two units of stock, at time
, for a price then our contingent claim is . -
(b) In a European put option, with strike price
, we have the option to sell a single unit of stock for strike price . It is advantageous to us to do so only if , so -
(c) If
we sell a unit of stock for strike price and otherwise do nothing. So our contingent claim is
(d) Holding both the contracts in (b) and (c) at once means that in both
and , we end up selling a single unit of stock for a fixed price . Our contingent claim for doing so is .
-
-
5.5 In the binomial model, the contingent claim of a European call option with strike price
is . -
5.6 A tree-like diagram of the possible changes in stock price, with the value of the contingent claim
written on for the final time step looks likeBy (5.3), the risk free probabilities in this case are
Using Proposition 5.2.6 recursively, as in Section 5.6, we can put in the value of the European put option at all nodes back to time
, givingHence, the value of our European put option at time
is . Changing the values of
and does not alter the value of our put option, since and do not appear in the pricing formulae. -
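Applying Proposition 5.2.6 recursively lends itself to a short program. The sketch below prices a European claim by backward induction under the risk-free probability q = (1 + r - d)/(u - d); the parameter values in the example are illustrative, not those of the exercise:

```python
def binomial_price(s, u, d, r, T, payoff):
    # Risk-free probability of an up-move in the binomial model.
    q = (1 + r - d) / (u - d)
    # Claim values at the terminal nodes, indexed by the number k of up-moves.
    values = [payoff(s * u**k * d**(T - k)) for k in range(T + 1)]
    # Backward induction: the value at a node is the discounted
    # q-expectation of the two values one step ahead.
    for _ in range(T):
        values = [(q * values[k + 1] + (1 - q) * values[k]) / (1 + r)
                  for k in range(len(values) - 1)]
    return values[0]

# Illustrative example: two-period model, s=100, u=2, d=0.5, r=0,
# European put with strike 100.
price = binomial_price(100, 2.0, 0.5, 0.0, 2, lambda s: max(100 - s, 0))
print(f"put price = {price:.4f}")  # 100/3 in this example
```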
5.7 A tree-like diagram of the possible changes in stock price, with the value of the option at all nodes of the tree shown in (green) square boxes, and hedging portfolios
written (in red) for each node of the tree:The hedging portfolios are found by solving linear equations of the form
, , where is the contingent claim and the initial stock price of the one period model associated to the given node. The value of the contingent claim at each node is then inferred from the hedging portfolio as . -
5.8 A tree-like diagram of the possible changes in stock price, with the value of the option at all nodes of the tree shown in (green) square boxes, and hedging portfolios
written (in red) for each node of the tree:The hedging portfolios are found by solving linear equations of the form
, , where is the contingent claim and the initial stock price of the one period model associated to the given node. The value of the contingent claim at each node is then inferred from the hedging portfolio as .The hedging portfolio is the same at each node. This is because the contingent claim of our call option always exceeds zero i.e. we are certain that at time
we would want to exercise our call option and buy a single unit of stock for a price . Since there is no interest payable on cash, this means that we can replicate our option (at all times) by holding a single unit of stock, and in cash. -
5.9 We have that
is adapted (see Section 5.4) and since we have also that . Hence, is a martingale under if and only if . That is, if is equal to . Hence, is a martingale under if and only if . Now consider
, and assume that and , which implies . Since is adapted, so is . Since we have so also . We haveHence,
Here we use taking out what is known, since and for , and also that is independent of . Hence, is a martingale under if and only if .
Chapter 6
-
6.1 Since
we havewhich tends to
as . Hence in . Lemma 6.1.2 then implies that also in probability and in distribution. Also, and , so by the sandwich rule we have as , and hence , which means that almost surely. -
-
(a) We have
and henceHere we use the linearity and absolute value properties of expectation. Hence,
. -
(b) Let
be a random variable such that and note that . Set for all , which implies that as . But so does not converge to in .
-
-
6.3 We have
when , and when . Hence, and by monotonicity of we havewhich means that
.We can write
Hence, .By Lemma 6.1.2, it follows that also
in probability and in distribution. -
6.4 We have
for all , so and . Hence for all . So, for example, with we have , which does not tend to zero as . Hence does not converge to in probability.However,
and have the same distribution, given byso we do have
as . Hence . -
6.5 Let
be the sequence of random variables from 6.1. Define .-
(a) For each
, we have . Hence, is an increasing sequence. Since for all we havemeaning that the sequence
is bounded above by . -
(b) Hence, since bounded increasing sequences converge, for any
the sequence converges. -
(c) The value of
is either or , both with probability . The value of is either , all with probability . The value of is either all with probability . -
(d) We can guess from (c) that the
are becoming more and more evenly spread across , so we expect them to approach a uniform distribution as . -
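This guess can be checked by simulation. The sketch below assumes, as the values in part (c) suggest, that the limit is built from independent fair coin flips X_k in {0,1} via the partial sums of X_k/2^k; this reconstruction of the definition is an assumption:

```python
import random

random.seed(2)

def partial_sum(n):
    # One sample of sum_{k=1}^{n} X_k / 2**k with X_k independent fair
    # {0,1} coin flips (assumed reconstruction of the exercise's process).
    return sum(random.randint(0, 1) / 2**k for k in range(1, n + 1))

samples = [partial_sum(30) for _ in range(50_000)]
# The empirical distribution function should be close to that of the
# uniform distribution on [0,1].
for t in (0.25, 0.5, 0.75):
    freq = sum(s <= t for s in samples) / len(samples)
    print(f"P(limit <= {t}) ~ {freq:.3f}")
```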
(e) We will use induction on
. Our inductive hypothesis is-
(IH)
: For all , it holds that .
In words, this says that
is uniformly distributed on . For the case
, we have so , hence (IH) holds. Now, assume that (IH)
holds. We want to calculate for . We consider two cases, dependent on whether is even or odd.-
• If
is even then we can write for some . Hence, Here we use that and are independent, and use (IH) to calculate . -
• Alternatively, if
is odd then we can write for some . Hence, Here, again, we use that and are independent, and use (IH) to calculate .
In both cases we have shown that
, hence (IH) holds. -
-
(f) We know that
almost surely, hence by Lemma 6.1.2 the convergence also holds in distribution. Hence . From part (e), for any
we have . Hence
, and the sandwich rule tells us that . Hence , for all , which means that has a uniform distribution on .
-
-
6.7 Since
we have , and since we have . Further, for all , whenever we have , and since this means that . Hence . Therefore, the monotone convergence theorem applies and
. -
6.6 Define
. Then and , almost surely. Hence, satisfies the conditions for the dominated convergence theorem and there exists a random variable such that and . Define and then we have and . -
-
(a) The equation
follows by iterating the linearity property of
times (or by a trivial induction). However, in both sides are defined by infinite sums, which are limits, and the linearity property does not tell us anything about limits. -
(b) We’ll take a limit as
on each side of (A.5). We’ll have to justify ourselves in each case.First, consider the right hand side. Since we have
almost surely, by monotonicity of . Therefore, is an increasing sequence of real numbers. Hence it is convergent (either to a real number or to ) and, by definition of an infinite series, the limit is written .Now, the left hand side, and here we’ll need the monotone convergence theorem. Define
. Then almost surely, so . Since for all , also , almost surely. Hence the monotone convergence theorem applies and where .Putting both sides together (and using the uniqueness of limits to tell us that the limits on both sides must be equal as
) we obtain that . -
(c) Since
, we have and hence also . However, the limit is not well defined, because oscillates (it jumps up/down by on each step) and does not converge. Hence, in this case is not well defined.
-
-
-
(a) We have both
and , so by uniqueness of limits for real sequences, we have for all . Hence, and have the same distribution (i.e. they have the same distribution functions ). -
(b) By definition of convergence in probability, for any
, for any there exists such that, for all ,By the triangle inequality we have
If
then or (or possibly both). Hence, continuing (A.6),Since this is true for any
and any , we have .
-
-
6.7 Suppose (aiming for a contradiction) that there exists a random variable
such that in probability. By the triangle inequality we haveHence, if
then or (or both). Therefore,Since
in probability, the right hand side of the above tends to zero as . This implies thatas
. But, and are independent and so for all . Therefore we have a contradiction, so there is no such that in probability.Hence, by Lemma 6.1.2, we can’t have any
such that almost surely or in (since it would imply convergence in probability).
Chapter 7
-
-
(a) We have
-
(b) We have
-
(c) We have
In the last line we use that .
-
-
7.2 We have
as required. -
-
(a) We note that
Hence
. Since for all , we have also that . Lastly, Therefore, is a martingale. In the above, in the second line we use that and that is independent of . To deduce the last line we use that .We next check that
is uniformly bounded in . We have . Using that , monotonicity of expectations gives thatHence
, so is uniformly bounded in .Therefore, the martingale convergence theorem applies, which gives that there exists some real-valued random variable
such that . -
(b) Note that
takes values in and that, from part (a), almost surely we have that converges to a real value. It follows immediately by Lemma 7.4.1 that (almost surely) there exists such that for all .
-
-
7.4 Here’s some R code to plot
time-steps of the symmetric random walk. Changing the value given to set.seed will generate a new sample.

> T=1000
> set.seed(1)
> x=c(0,2*rbinom(T-1,1,0.5)-1)
> y=cumsum(x)
> par(mar=rep(2,4))
> plot(y,type="l")
Adapting it to plot the asymmetric case and the random walk from exercise 7.3 is left for you.
In case you prefer Python, here’s some Python code that does the same job. (I prefer Python.)
import numpy as np
import matplotlib.pyplot as plt

p = 0.5
rr = np.random.random(1000)
step = 2*(rr < p) - 1
start = 0
positions = [start]
for x in step:
    positions.append(positions[-1] + x)
plt.plot(positions)
plt.show()
Extending this code as suggested is left for you. For the last part, the difference in behaviour that you should notice from the graph is that the walk from Exercise 7.3 eventually stops moving (at some random time), whereas the random walk from Q2 of Assignment 3 never stops moving but makes smaller and smaller steps.
-
7.5 We will argue by contradiction. Suppose that
was uniformly bounded in . Then the martingale convergence theorem would give that for some real valued random variable . However, takes values in and we have for all , so Lemma 7.4.1 gives that cannot converge almost surely. This is a contradiction. -
-
(a) Take the offspring distribution to be
. Then , and hence for all , so dies out. -
(b) Take the offspring distribution to be
. Then , so , and hence .
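Survival and extinction are easy to observe in simulation. In the sketch below each individual independently has two children with probability p and none otherwise, so the mean offspring number is 2p; this offspring law is an illustrative choice, not necessarily the one intended in the exercise:

```python
import random

def galton_watson(p, generations, rng):
    # Each individual has 2 children with probability p, else 0 children.
    # Returns the population size after the given number of generations.
    z = 1
    for _ in range(generations):
        z = sum(2 for _ in range(z) if rng.random() < p)
        if z == 0:
            break  # extinct; the population stays at zero forever
    return z

rng = random.Random(3)
# Subcritical case: mean offspring 2*0.4 = 0.8 < 1, so by Lemma 7.4.6
# the process dies out almost surely.
extinct = sum(galton_watson(0.4, 100, rng) == 0 for _ in range(5000)) / 5000
print(f"fraction extinct by generation 100 ~ {extinct:.3f}")
```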
-
-
-
(a) We have
This is clearly not equal to , so is not a martingale.This result might be rather surprising, but we should note that the urn process here is not ‘fair’, when considered in terms of the proportion of (say) red balls. If we started with more red balls than black balls, then over time we would expect to see the proportion of red balls reduce towards
, as we see in part (b): -
(b) Your conjecture should be that, regardless of the initial state of the urn,
. (It is true.)
-
-
-
(a) Let
be the number of red balls in the urn at time , and then . Let us abuse notation slightly and write for the event that the ball is red, and for the event that the ball is black. Let denote the first ball drawn on the draw, and denote the second ball drawn.Let
. It is clear that , so , and that is measurable with respect to . We have -
(b) Since
we have , hence is uniformly bounded in . Hence, by the martingale convergence theorem there exists a random variable such that .Since
, this means that also . However, is integer valued so, by Lemma 7.4.1, if is to converge then it must eventually become constant. This can only occur if, eventually, the urn contains either (i) only red balls or (ii) only black balls.In case (i) we have
eventually, which means that . In case (ii) we have eventually, which means that . Hence, almost surely.
-
-
-
(a) Consider the usual Pólya urn, from Section 4.2, started with just one red ball and one black ball. After the first draw is complete we have two possibilities:
-
1. The urn contains one red ball and two black balls.
-
2. The urn contains two red balls and one black ball.
In the first case we reach precisely the initial state of the urn described in 7.9. In the second case we also reach the initial state of the urn described in 7.9, but with the colours red and black swapped.
If
, then in the first case the fraction of red balls would tend to zero, and (by symmetry of colours) in the second case the limiting fraction of black balls would tend to zero; thus the limiting fraction of black balls would always be either (the first case) or (the second case). However, we know from Proposition 7.4.3 that the limiting fraction of black balls is actually uniform on . Hence, . -
-
(b) You should discover that having a higher proportion of initial red balls makes it more likely for
(the limiting proportion of red balls) to be closer to . However, since is a random variable in this case, you may need to take several samples of the process (for each initial condition that you try) to see the effect.Follow-up challenge question: In fact, the limiting value
has a Beta distribution, . You may need to look up what this means, if you haven’t seen the Beta distribution before. Consider starting the urn with red and black balls, and see if you can use your simulations to guess a formula for and in terms of and (the answer is a simple formula!).
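A possible simulation sketch for this experiment: run the urn for a large fixed number of steps to approximate the limiting proportion, and repeat to build up a sample. The initial counts below are illustrative; the sample mean can be compared with the Beta(a, b) mean a/(a+b):

```python
import random

def limit_proportion(red, black, steps, rng):
    # Approximate one sample of the limiting proportion of red balls by
    # running the Polya urn for many steps (the proportion converges a.s.).
    for _ in range(steps):
        if rng.random() < red / (red + black):
            red += 1
        else:
            black += 1
    return red / (red + black)

rng = random.Random(4)
red0, black0 = 2, 1
samples = [limit_proportion(red0, black0, 500, rng) for _ in range(5000)]
mean = sum(samples) / len(samples)
# If the limit is Beta(red0, black0), its mean is red0/(red0+black0) = 2/3;
# by the martingale property the sample mean should be near that value.
print(f"estimated mean of the limit ~ {mean:.3f} (2/3 ~ 0.667)")
```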
-
-
-
(a) We have that
is a non-negative martingale, hence . Thus is uniformly bounded in and the (first) martingale convergence theorem applies, giving that for a real valued . Since almost surely we must have almost surely. -
(b) The possible values of
are . Note that for all . On the event that
there exists such that for all sufficiently large . Choose such that and then for all sufficiently large we have . For such , since we have , which implies that could not converge. Hence, the only possible almost sure limit is .If
was uniformly bounded in then by the (second) martingale convergence theorem we would have . However and , so this is a contradiction. Hence is not uniformly bounded in . -
(c) We have
. Noting that and hence , taking logs gives that , which gives . For the final part, if
is a simple asymmetric random walk with upwards drift, then is a simple asymmetric random walk with downwards drift. Applying what we’ve just discovered thus gives , and multiplying by gives the required result.
-
-
-
(a) This is a trivial induction: if
is even then is odd, and vice versa. -
(b) By part (a),
since is an even number and is an odd number.If
is to start at zero and reach zero at time , then during time it must have made precisely steps upwards and precisely steps downwards. The number of different ways in which the walk can do this is (we are choosing exactly steps on which to go upwards, out of total steps). Each one of these ways has probability of occurring, so we obtain . -
(c) From (b) we note that
Hence, using the hint, we have
Iterating this formula obtains that
Recall that as , and that as . Hence, the right hand side of the expression above tends to zero as . Hence, since , by the sandwich rule we have as .
-
-
-
(a) We note that
Here we use that and that is independent of .Since
is a martingale we must haveSince
this implies that , which implies that . Since this holds for any , we have for all ; so is constant. -
(b) If
is to be a supermartingale then, in place of (A.8), we would needNote that we would need this equation to hold true with probability one. To handle the
we now need to care about the sign of . The key idea is that can be both positive or negative, so the only way that (A.9) can hold is if . We will use some of the results from the solution of exercise 7.11. In particular, for odd
from 7.11(b) we have , and for even from (A.7) we have . In both cases we have . Since and have the same distribution we have , and henceHence, for all
we have and .Now, consider
. We have that here is positive probability that . Hence, by (A.9), we must have , which means that . But, there is also positive probability that , hence by (A.9) we must have , which means that . Hence , for all , and by a trivial induction we have for all .
-
-
-
(a) Since
we note that . Further, so it makes sense to define
. Then , and
are independent. Moreover, the are identically distributed and independent of . Hence, by taking out what is known ( ) we have So, from (A.10) we obtain -
(b) By part (a) and (3.4), we have
Taking expectations, and using that
, we obtain
-
-
-
(a) Note that
for all , because taking an over is an infimum over a smaller set than over . Hence, is monotone increasing and hence the limit exists (almost surely). -
(b) Since
, we have . By definition of , for each and there exists some such that . Combining these two properties, we obtain -
(c) Letting
, in the above equation, which means that also because , we take an almost sure limit to obtain . Since we have this inequality for any , in fact almost surely. -
(d) We have shown that
is an increasing sequence and , and since each , we have also . Hence, the monotone convergence theorem applies to and , and therefore . -
(e) From the monotone convergence theorem, writing in terms of
rather than , we obtainHowever, since
, the monotonicity property of expectation gives us that, for all ,Combining (A.11) and (A.12), with the fact that limits preserve weak inequalities, we obtain that
, as required.
-
-
-
(a) All the
have the same distribution, and are independent by independence of the and . Thus, is a Galton-Watson process with offspring distribution given by the common distribution of the . -
(b) By definition of
we have . Hence, from (7.9), if then also . Since , an easy induction shows that for all . -
(c) We have
Recall that each has the same distribution as . Hence,Since
we therefore have . Moreover,It is clear that
is continuous (in fact, is linear). Hence, by the intermediate value theorem, there is some such that . -
(d) By (c), we can choose
such that . Hence, , so by (a) is a Galton-Watson process with an offspring distribution that has mean . By Lemma 7.4.6, dies out, almost surely. Since, from (b), , this means must also die out, almost surely.
-
Chapter 8
-
8.1 We have
so, for all , as . Hence almost surely as . Further, for all and , so the dominated convergence theorem applies and shows that as . -
8.2 We check the two conditions of the dominated convergence theorem. To check the first condition, note that if
then . Hence, since , as we have almost surely. To check the second condition, set
and then so . Also, , so is a dominated random variable for . Hence, the dominated convergence theorem applies and . -
-
(a) We look to use the dominated convergence theorem. For any
we have , hence for all such that we have . Therefore, as , , which means that almost surely. We have
and , so we can use as the dominating random variable. Hence, by the dominated convergence theorem, . -
(b) We have
which means
and also that As , we have , hence (using that is a continuous function) we have . Hence, . -
(c) Suppose that we wanted to use the DCT in (b). We still have
almost surely, but any dominating random variable would have to satisfy for all , meaning that also , which means that ; thus there is no dominating random variable . Therefore, we can't use the DCT here, but we have shown in (b) that the conclusion of the DCT does hold: we have that does tend to zero. We obtain that the conditions of the DCT are sufficient but not necessary for its conclusion to hold.
-
-
8.4 Note that
If is a stopping time then both parts of the right hand side are elements of , hence so is . Conversely, if
for all , then for we have . Hence the identity shows that , and therefore is a stopping time. -
8.5 We can write
. By exercise 8.4 we have for all , hence for all . -
8.6 We can write
Since
and are both stopping times we have for all , which means that for all . Hence and therefore is also a stopping time. -
8.7 If
then . Since is a stopping time we have , hence . As
we have , hence . -
-
(a) We have
Note that
if and only if for all . That is, if and only if we pick a red ball out of the urn at times . Hence,
Therefore, since for all , we have and . -
(b) We note that
is a stopping time, with respect to , since
We showed in Section 4.2 that
was a martingale. Since the process is bounded. By part (a) for any we have , hence and . Hence, we have condition (b) of the optional stopping theorem, soBy definition of
we have . Hence , so we obtain .
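As an informal numerical check (not part of the book's solution), the following Python sketch simulates an urn of the Pólya type referenced from Section 4.2, under assumed conventions: the urn starts with 1 red and 1 black ball, a ball is drawn uniformly and returned with one extra ball of the same colour, M_n is the fraction of red balls, and T is the first time a black ball is drawn, capped so that the stopping time is bounded. Optional stopping (condition (b)) then predicts E[M_{T∧cap}] = M_0 = 1/2.

```python
import random

random.seed(2)

# Monte Carlo sketch of the urn martingale. Assumptions (not stated in this
# excerpt): start with 1 red, 1 black; draw uniformly and return the ball
# plus one more of the same colour; M_n = fraction of red balls;
# T = first time a black ball is drawn, capped at `cap` draws.
def stopped_fraction(cap=50):
    red, black = 1, 1
    for _ in range(cap):  # T ^ cap is a bounded stopping time
        if random.random() < red / (red + black):
            red += 1          # drew red: add a red ball
        else:
            black += 1        # drew black: add a black ball, time T reached
            break
    return red / (red + black)

trials = 100_000
estimate = sum(stopped_fraction() for _ in range(trials)) / trials
# Optional stopping predicts E[M_{T ^ cap}] = M_0 = 1/2.
```

The empirical mean sits close to 1/2, consistent with the optional stopping theorem for this bounded martingale and bounded stopping time.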
-
-
-
(a) We showed in exercise 7.8 that, almost surely, there was a point in time at which the urn contained either all red balls or all black balls. Hence,
. Moreover, almost surely, since the urn eventually contains either all red balls or all black balls, we have either
(all red balls) or (all black balls). -
(b) We note that
is a stopping time, since
In exercise 7.8 we showed that
was a martingale. Since in fact is a bounded martingale, and we have from above, condition (b) of the optional stopping theorem applies. Hence,
Lastly, since
is either or (almost surely) we note that
Hence, .
-
-
-
(a) We use the filtration
. We have and since we have also that . Also,
Here we use the taking out what is known rule (since ), along with the definition of our urn process to calculate e.g. as a function of . Hence is a martingale. -
(b) Since
, the process is a bounded martingale. The time is bounded above by , hence condition (b) for the optional stopping theorem holds and . Since this gives us . Hence,
as required. -
Chapter 9
-
-
(a) We have that
is a stopping time and (from Section 4.1) is a martingale. We want to apply the optional stopping theorem to .
We have
from Lemma 9.1.1 and for all we have
Thus, condition (c) for the optional stopping theorem applies and . That is,
We thus have
as required. -
(b) We have
Putting equations (9.5) and (9.6) into the above, and then using part (a) gives us
-
-
-
(a) We have
from Lemma 9.3.2. Hence also . Since these two possibilities are disjoint, in fact .
For the second equation, we will use that
is a martingale. We want to apply the optional stopping theorem, but none of the conditions apply directly to . Instead, set and note that Lemma 8.2.4 implies that is a martingale. By definition of we have , hence is bounded. We have from above, hence condition (b) of the optional stopping theorem applies to . We thus deduce that
Hence
as required. In the above, the first line partitions on the value of (using what we already deduced). The third line follows from the second because , hence , and similarly for . -
(b) Solving the two equations (this is left for you) gives that
and . -
(c) From Exercise 4.4 we have that
is a martingale. Again, none of the conditions of optional stopping directly apply, but is a martingale by Lemma 8.2.4. Similarly to above, we have , so is bounded. Hence condition (b) of the optional stopping theorem applies and
Thus , which gives
Here, the second line follows because when we have , hence also , and similarly for . The fourth line uses part (b).
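As an informal numerical check (not part of the formal solution), the following Python sketch runs a Monte Carlo test of the classical gambler's-ruin answers for the symmetric random walk started at 0 (the book's displayed formulas are elided in this excerpt; the values a = 3, b = 2 are illustrative): P(hit +a before −b) = b/(a+b) and E[T] = ab, where T is the first time the walk reaches +a or −b.

```python
import random

random.seed(3)

# Monte Carlo check of the classical gambler's-ruin answers for the
# symmetric walk from 0: P(hit +a before -b) = b/(a+b) and E[T] = a*b.
a, b = 3, 2        # illustrative levels
trials = 50_000
hits_a = 0
total_time = 0
for _ in range(trials):
    s, t = 0, 0
    while -b < s < a:
        s += random.choice([-1, 1])
        t += 1
    hits_a += (s == a)
    total_time += t

p_hat = hits_a / trials       # should be near b/(a+b) = 0.4
mean_T = total_time / trials  # should be near a*b = 6
```

Both estimates agree with the martingale computations: optional stopping applied to the walk itself gives the hitting probability, and applied to the squared-walk martingale gives the expected hitting time.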
-
-
-
(a) Note that movements up are by
, whilst movements down are by . In order for to return to zero at time , it must have made twice as many movements down as up. In particular must therefore be a multiple of , so .
For
, we can count the possible ways to have movements up and movements down by choosing upwards movements out of movements, that is . Taking the probabilities into account and then using (9.7) we obtain -
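As an informal check of the counting step (not part of the formal solution), the following Python sketch enumerates paths by brute force. The exact step sizes are elided in this excerpt, so we assume up-steps of +2 and down-steps of −1, which matches "twice as many movements down as up" for a return to zero at time 3n; the count of such paths is then C(3n, n).

```python
from itertools import product
from math import comb

# Brute-force check of the path count, under the assumption (step sizes are
# elided above) that up-steps are +2 and down-steps are -1: a return to zero
# at time 3n needs exactly n ups and 2n downs, giving comb(3n, n) paths.
def paths_to_zero(n):
    """Count step sequences of length 3n from {+2, -1} summing to zero."""
    return sum(1 for steps in product((2, -1), repeat=3 * n) if sum(steps) == 0)

counts = {n: paths_to_zero(n) for n in (1, 2, 3)}
```

For n = 1, 2, 3 the enumeration gives 3, 15, 84, matching C(3, 1), C(6, 2), C(9, 3).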
(b) The argument that
, under the assumption , is exactly the same as before.
We need to modify the next part, beginning with (9.9), which is replaced by the result in part (a). Note that
now plays the role of what was previously , being the possible returns to zero. Having (a) introduces an extra factor of , but using the same method as before we can still deduce that . We thus arrive at a contradiction in the same way.
-
-
-
(a) We have
. The condition implies that . Writing , it follows from the strong law of large numbers that (almost surely) there exists such that for all we have . Hence . -
(b) We calculate
In the first line of the above we partition on the value of . The first term of the second line follows because if then , and the other terms use the relationship between expectation and conditional expectation. The third line follows because .
The fourth line uses that, on the event
, is equal to (accounting for the first move ) plus two independent copies ( and ) of (accounting for the time to move from and then ). Similarly, on the event , is equal to plus three independent copies of . Note that it is crucial here that can only jump upwards by in each step of time, meaning that it cannot go above without first being equal to .
The strong Markov property gives that
and are independent of , leading to the fifth line. The remainder of the calculation is straightforward. We thus obtain
Solving this equation gives
.
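As an informal numerical check (not part of the formal solution), the following Python sketch tests the first-step recursion for E[T], where T is the first hitting time of level 1 from 0. The book's step distribution is elided in this excerpt, so we assume an illustrative one with upward jumps of +1 only, as the argument requires: P(+1) = 0.7, P(−1) = 0.2, P(−2) = 0.1. The recursion E[T] = 1 + (2·0.2 + 3·0.1)·E[T] then gives E[T] = 1/0.3.

```python
import random

random.seed(4)

# Monte Carlo check of the first-step recursion for E[T], T = first hitting
# time of level 1 from 0, under an assumed step distribution (the book's is
# elided above): P(+1) = 0.7, P(-1) = 0.2, P(-2) = 0.1.
# The recursion E[T] = 1 + (2*0.2 + 3*0.1)*E[T] gives E[T] = 1/0.3.
def hitting_time_of_one():
    s, t = 0, 0
    while s < 1:
        u = random.random()
        s += 1 if u < 0.7 else (-1 if u < 0.9 else -2)
        t += 1
    return t

trials = 40_000
mean_T = sum(hitting_time_of_one() for _ in range(trials)) / trials
# mean_T should be close to 1/0.3 = 3.33...
```

The same value also follows from Wald's identity, since the mean step is 0.7 − 0.2 − 0.2 = 0.3 and the stopped position is 1.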
-
-
9.5 Recall that
even when even, and odd when is odd. Hence is odd. From the fact given in the question, there exists such that for all we have . Hence
as required.
-
-
(a) Note that
. Since for all , for all by Proposition 2.2.6. Since we have , hence . We have also that
Here we use the taking out what is known rule, the fact that is independent of and the relationship between conditional expectation and independence. To deduce the final line we note that .
Note: This is a more general case of the martingale from Exercise 4.1.
-
(b) Condition (a) fails because
is not bounded – to see this, note that for any there is a positive probability that remains below zero, in which case . Condition (b) fails because is unbounded. Condition (c) fails because we have by Lemma 9.3.4, and so also . -
(c) By Lemma 8.2.4 the process
is a martingale, and by definition of we have that . Since , we thus have , so the martingale is bounded. By Lemma 9.3.2 , so condition (b) of the optional stopping theorem applies to and , hence
By Lemma 9.3.2 we have
, hence almost surely either or . Hence,
In the above, the final line follows by symmetry about zero, which implies that the two expectations are in fact equal. Thus , for all . With this in hand we can calculate
In the first line we partition on whether is equal to or . In the second line we apply the formulae deduced above at both and , and to deduce the final line we recall that .
Note:
and are not independent, but if they were then it would be possible to write a simpler solution. Check that you didn’t fall into this trap.
-
-
9.7 We have shown in Lemma 9.5.1 that
will almost surely return to the origin in finite time. By applying the strong Markov property at each return to the origin, we obtain that will return to the origin infinitely often.
Fix some
. There is a path from the origin to , and with positive probability the walk will take that path, hence is positive. By the strong Markov property, this means that the probability that visits in between any two successive visits to the origin is also positive. Again by the strong Markov property, whether this occurs is independent for each successive pair of returns to the origin. Hence we have infinitely many trials in which to visit , each with the same positive success probability. This will result in infinitely many visits to , almost surely.
This holds for any
, which completes the argument. -
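As an informal illustration of recurrence (not part of the formal solution), the following Python sketch estimates the probability that the symmetric walk visits a fixed site within a fixed horizon; the site x = 3 and horizon are illustrative assumptions. As the horizon grows this probability tends to 1, consistent with the walk visiting every site infinitely often.

```python
import random

random.seed(5)

# Estimate P(the symmetric walk visits site x within `horizon` steps).
# For a recurrent walk this tends to 1 as the horizon grows.
def visits_site(x=3, horizon=1000):
    s = 0
    for _ in range(horizon):
        s += random.choice([-1, 1])
        if s == x:
            return True
    return False

trials = 5_000
p_visit = sum(visits_site() for _ in range(trials)) / trials
```

With this horizon the estimate is already close to 1; the small shortfall reflects paths that have not yet reached x, not paths that never will.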
-
(a) If
then the set must be infinite, meaning that has returned infinitely many times to the origin. However, this has probability zero: by the strong Markov property, after each visit there is a positive probability of not returning again, independently of the past; eventually that will happen.
(b)
is not a stopping time. One way to see this is to note that , which depends on the whole path taken by the walk, so is not an element of for any . -
(c) The event
is the event that , but the walk is never zero after that time. Hence
Note that we have used the Markov property to deduce the fifth line.
Recall that
is even if and only if is even. Taking the above equation and summing over , noting that we obtain
The required result follows by noting that .
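As an informal numerical check (not part of the formal solution), the following Python sketch simulates a biased walk with an illustrative up-probability p = 0.7 (the book's parameters are elided in this excerpt). The classical return probability is r = 1 − |p − q| = 2(1 − p) = 0.6, and by the strong Markov property the total number of returns to the origin is geometric with mean r/(1 − r) = 1.5; the finite horizon truncates the simulation, which is harmless since late returns of a transient walk are vanishingly unlikely.

```python
import random

random.seed(6)

# Biased walk, illustrative p = 0.7: the walk ever returns to 0 with
# probability r = 1 - |p - q| = 0.6, and the number of returns is geometric
# with mean r/(1 - r) = 1.5. The horizon truncates the simulation.
p = 0.7

def returns_to_zero(horizon=300):
    s, count = 0, 0
    for _ in range(horizon):
        s += 1 if random.random() < p else -1
        if s == 0:
            count += 1
    return count

trials = 10_000
samples = [returns_to_zero() for _ in range(trials)]
p_return = sum(c > 0 for c in samples) / trials   # near r = 0.6
mean_returns = sum(samples) / trials              # near r/(1 - r) = 1.5
```

Both estimates match the geometric picture described in part (d).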
(d) During the proof of Lemma 9.3.4, whilst we were arguing by contradiction, we noted that if
then would have a Geometric( ) distribution. In part (c) we’ve discovered the mean of this geometric distribution. We also came close, for the same reason, during the proofs of Lemmas 9.5.1 and 9.5.2, and in Remark 9.4.2. In all of those cases, all we needed was to note that would be finite – we didn’t need to know its value.
-
-
9.9 Let
. Note that is a finite set.
Consider some
. If visits then, the first time that it does, we may apply the strong Markov property. From then on the walk will behave independently of its past, again moving one unit of space in a random direction on each step of time. Applying part (c) of Exercise 9.8, the expected number of returns to is finite, hence there will almost surely be a last return time to .
We can do the above for any
. Since is finite, this means that almost surely there is a last return time to the set . That is, there exists some random variable such that for all . Hence for all . Since this holds for any , we have .
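As an informal illustration of the conclusion (not part of the formal solution), the following Python sketch simulates a transient walk with an illustrative up-probability p = 0.7 and records, for each path, the last time it visits the finite set {−5, …, 5} within the horizon. For a transient walk this last visit time is almost surely finite, and the path ends far from the set, consistent with the walk tending to infinity.

```python
import random

random.seed(7)

# Transient walk, illustrative p = 0.7: record the last visit time to the
# set {-5,...,5} within the horizon, and the final position. Last visits
# concentrate early; the walk then drifts away for good.
p, k, horizon = 0.7, 5, 2000

def last_visit_and_end():
    s, last = 0, 0
    for n in range(1, horizon + 1):
        s += 1 if random.random() < p else -1
        if abs(s) <= k:
            last = n
    return last, s

results = [last_visit_and_end() for _ in range(200)]
max_last_visit = max(last for last, _ in results)
min_end = min(end for _, end in results)
```

Across all simulated paths the last visit occurs early in the run, while every final position is far above the set, matching the argument that there is a finite last return time after which the walk stays away.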