last updated: January 23, 2025

Bayesian Statistics


6.3 Exercises on Chapter 6

  • 6.1 Show that the mode of the $\mathrm{Gamma}(\alpha,\beta)$ distribution is $\frac{\alpha-1}{\beta}$, where $\alpha \ge 1$. What about $\alpha \in (0,1)$?
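
A quick numerical sanity check of the claimed mode (not part of the exercise; the shape and rate values below are arbitrary illustrative choices) is to maximize the unnormalized log-density on a grid:

```python
import numpy as np

# Unnormalized Gamma(alpha, beta) log-density: (alpha-1)*log(x) - beta*x.
# alpha and beta are arbitrary illustrative values with alpha >= 1.
alpha, beta = 3.0, 2.0
x = np.linspace(1e-6, 10, 200001)
logpdf = (alpha - 1) * np.log(x) - beta * x
mode_est = x[np.argmax(logpdf)]
print(mode_est, (alpha - 1) / beta)  # both approximately 1.0
```

For $\alpha \in (0,1)$ the same grid search returns the left endpoint of the grid, reflecting that the density is unbounded as $x \to 0$ and has no mode.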

  • 6.2 The following equations, written in Bayesian shorthand, are the key conclusions from results in earlier chapters of these notes. Which results are they from?

    • (a) $f(x|y) = \frac{f(y,x)}{f(y)}$.

    • (b) If $\theta \sim \mathrm{Beta}(\alpha,\beta)$ and $x|\theta \sim \mathrm{Bernoulli}(\theta)^n$ then $\theta|x \sim \mathrm{Beta}(\alpha+k, \beta+n-k)$, where $x=(x_i)_{i=1}^n$ and $k=\sum_{i=1}^n x_i$.

    Write the following results in Bayesian shorthand, using similar notation to that in parts (a) and (b).

    • (c) Lemma 4.2.1.

    • (d) From Section 4.5, the two facts above Lemma 4.5.2 concerning marginal and conditional distributions of the NGamma distribution.
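
The conjugacy fact in (b) can be checked numerically (a sketch, not part of the exercise; the prior parameters and data below are arbitrary illustrative choices) by normalizing the product of prior and likelihood kernels on a grid and comparing with the closed-form Beta density:

```python
import numpy as np
from math import lgamma

# Grid check of Beta-Bernoulli conjugacy; all numbers are illustrative.
alpha, beta, n, k = 2.0, 3.0, 10, 4
th = np.linspace(1e-6, 1 - 1e-6, 100001)
h = th[1] - th[0]

# prior kernel times likelihood kernel, normalized numerically on the grid
post = th**(alpha - 1 + k) * (1 - th)**(beta - 1 + n - k)
post /= post.sum() * h

# closed-form Beta(alpha + k, beta + n - k) density for comparison
a2, b2 = alpha + k, beta + n - k
logB = lgamma(a2) + lgamma(b2) - lgamma(a2 + b2)
closed = np.exp((a2 - 1) * np.log(th) + (b2 - 1) * np.log(1 - th) - logB)
print(np.max(np.abs(post - closed)))  # close to zero
```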

  • 6.3 The following results are written in Bayesian shorthand.

    • (a) If $x \sim N(0,1)$ then $x|\{x>0\} \overset{d}{=} |x|$.

    • (b) If $x$ and $y$ are independent then $x|y \overset{d}{=} x$.

    In each case, write a version of the results in precise mathematical notation. Which parts of Chapter 1 are they closely related to?
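
The distributional identity in (a) can be illustrated by simulation (an informal sketch; seed and sample size are arbitrary choices), comparing quantiles of the conditioned sample with those of the folded sample:

```python
import numpy as np

# Monte Carlo illustration of (a): x | {x > 0} has the same distribution
# as |x| when x ~ N(0,1). Seed and sample size are arbitrary choices.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
cond = x[x > 0]        # samples from x | {x > 0}
folded = np.abs(x)     # samples from |x|

q = [0.25, 0.5, 0.75, 0.9]
print(np.quantile(cond, q))
print(np.quantile(folded, q))  # the two quantile vectors nearly agree
```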

  • 6.4 Suppose that we model $x|\theta \sim \mathrm{NegBin}(m,\theta)^n$, where $m \in \mathbb{N}$ is fixed and $\theta \in (0,1)$ is an unknown parameter.

    • (a) Show that $f(x|\theta) \propto \theta^{mn}(1-\theta)^{\sum_{i=1}^n x_i}$.

    • (b) Show that the prior $\theta \sim \mathrm{Beta}(\alpha,\beta)$ is conjugate to $\mathrm{NegBin}(m,\theta)^n$, and find the posterior parameters.

    • (c)

      • (i) Show that the reference prior for $\theta$ is given by $f(\theta) \propto \theta^{-1}(1-\theta)^{-1/2}$.

      • (ii) Does $f(\theta)$ define a proper distribution?

      • (iii) Find the posterior density $f(\theta|x)$ arising from this prior.

    Hint: The setup given is a Bayesian model with model family $M_\theta = \mathrm{NegBin}(m,\theta)^n$.
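
The conjugacy claim in (b), with likelihood kernel $\theta^{mn}(1-\theta)^{\sum x_i}$, can be checked numerically (a sketch; the parameters and data below are made-up illustrative values) against the closed-form $\mathrm{Beta}(\alpha+mn, \beta+\sum x_i)$ posterior:

```python
import numpy as np
from math import lgamma

# Grid check of 6.4(b); all concrete numbers are hypothetical choices.
alpha, beta_, m, n = 2.0, 3.0, 4, 6
x = np.array([3, 1, 5, 2, 0, 4])        # hypothetical NegBin counts, length n
th = np.linspace(1e-6, 1 - 1e-6, 100001)
h = th[1] - th[0]

# prior kernel times likelihood kernel, normalized numerically on the grid
post = th**(alpha - 1 + m*n) * (1 - th)**(beta_ - 1 + x.sum())
post /= post.sum() * h

# closed-form Beta(alpha + m*n, beta + sum x_i) density for comparison
a2, b2 = alpha + m*n, beta_ + x.sum()
logB = lgamma(a2) + lgamma(b2) - lgamma(a2 + b2)
closed = np.exp((a2 - 1) * np.log(th) + (b2 - 1) * np.log(1 - th) - logB)
print(np.max(np.abs(post - closed)))  # close to zero
```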

  • 6.5 Suppose that we model $x|\mu,\tau \sim N(\mu,\frac{1}{\tau})^n$, where both $\mu$ and $\tau$ are unknown parameters. We use the improper prior $f(\mu,\tau) \propto \frac{1}{\tau}$ for $\tau>0$, and $f(\mu,\tau)=0$ elsewhere.

    • (a) Show that for $\mu \in \mathbb{R}$ and $\tau > 0$ the posterior distribution satisfies

      $f(\mu,\tau|x) \propto \tau^{\frac{n}{2}-1}\exp\left(-\frac{\tau}{2}\sum_{i=1}^{n}(x_i-\mu)^2\right).$

    • (b) Find the marginal p.d.f. of $\tau|x$. Show that $(\mu,\tau)|x$ is a proper distribution if and only if $n \ge 2$.

    Hint: The setup given is a Bayesian model with model family $M_{\mu,\tau} = N(\mu,\frac{1}{\tau})^n$. For part (b) use the sample-mean-variance identity (4.10).
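
The propriety claim in (b) can be illustrated numerically (an informal sketch with made-up data points). Integrating the posterior kernel over $\mu$ in closed form leaves $g(\tau) = \tau^{n/2-1}\sqrt{2\pi/(n\tau)}\,e^{-\tau s/2}$ with $s = \sum_i (x_i-\bar{x})^2$; its integral near $\tau = 0$ diverges for $n=1$ but converges for $n \ge 2$:

```python
import numpy as np

def mass(x, eps):
    # Integral over tau in [eps, 50] of the mu-marginalized posterior kernel
    # g(tau) = tau^{n/2-1} * sqrt(2*pi/(n*tau)) * exp(-tau*s/2),
    # where s = sum_i (x_i - mean(x))^2 (Gaussian integral over mu done exactly).
    x = np.asarray(x, dtype=float)
    n, s = len(x), np.sum((x - x.mean())**2)
    tau = np.linspace(eps, 50, 500001)
    g = tau**(n/2 - 1) * np.sqrt(2*np.pi/(n*tau)) * np.exp(-tau*s/2)
    return np.sum(g) * (tau[1] - tau[0])

for eps in [1e-2, 1e-4, 1e-6]:
    # n = 1: mass keeps growing as eps -> 0; n = 2: mass stabilizes
    print(eps, mass([1.7], eps), mass([1.2, 2.3], eps))
```

The data points 1.7 and (1.2, 2.3) are arbitrary; any sample with $n=1$ versus $n=2$ shows the same contrast.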

  • 6.6 Let $(M_\theta)_{\theta\in\Pi}$ be a continuous family of distributions. For $i=1,2$, let $\Theta_i$ be a continuous random variable with p.d.f. $f_{\Theta_i}$, both taking values in $\mathbb{R}^d$. Let $\alpha,\beta \in (0,1)$ be such that $\alpha+\beta=1$.

    • (a) Show that $f_\Theta(\theta) = \alpha f_{\Theta_1}(\theta) + \beta f_{\Theta_2}(\theta)$ is a probability density function.

    • (b) Consider Bayesian models $(X_1,\Theta_1)$ and $(X_2,\Theta_2)$, with the same model family $(M_\theta)$ and different prior distributions. Consider also a third Bayesian model $(X,\Theta)$ with model family $(M_\theta)$ and prior $\Theta$ with p.d.f. $f_\Theta(\theta) = \alpha f_{\Theta_1}(\theta) + \beta f_{\Theta_2}(\theta)$.

      Show that the posterior distributions of these three models satisfy

      $f_{\Theta|\{X=x\}}(\theta) = \alpha' f_{\Theta_1|\{X_1=x\}}(\theta) + \beta' f_{\Theta_2|\{X_2=x\}}(\theta)$

      where $\alpha' = \frac{\alpha Z_1}{\alpha Z_1 + \beta Z_2}$ and $\beta' = \frac{\beta Z_2}{\alpha Z_1 + \beta Z_2}$. Here $Z_1$ and $Z_2$ are the normalizing constants given in Theorem 3.1.2 for the posterior distributions of $(X_1,\Theta_1)$ and $(X_2,\Theta_2)$.

    • (c) Outline briefly how to modify your argument in (b) to also cover the case of discrete Bayesian models.
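
The mixture-posterior identity in (b) can be checked on a grid (a sketch, not a solution; the Bernoulli likelihood, Beta priors, weights, and data are all hypothetical illustrative choices):

```python
import numpy as np

# Grid check of 6.6(b) for a Bernoulli likelihood with two Beta priors.
th = np.linspace(1e-6, 1 - 1e-6, 100001)
h = th[1] - th[0]

def beta_kernel(a, b):
    return th**(a - 1) * (1 - th)**(b - 1)

n, k = 10, 7                    # data: k successes in n Bernoulli trials
lik = th**k * (1 - th)**(n - k)
a_w, b_w = 0.4, 0.6             # prior mixture weights, a_w + b_w = 1

p1 = beta_kernel(2, 5); p1 /= p1.sum() * h   # prior 1: Beta(2,5)
p2 = beta_kernel(5, 2); p2 /= p2.sum() * h   # prior 2: Beta(5,2)

# posteriors of the two component models, and their evidences Z1, Z2
Z1 = np.sum(lik * p1) * h; post1 = lik * p1 / Z1
Z2 = np.sum(lik * p2) * h; post2 = lik * p2 / Z2

# posterior under the mixture prior
pm = a_w * p1 + b_w * p2
Zm = np.sum(lik * pm) * h; postm = lik * pm / Zm

# claimed identity: postm = a'*post1 + b'*post2 with updated weights
ap = a_w * Z1 / (a_w * Z1 + b_w * Z2)
bp = b_w * Z2 / (a_w * Z1 + b_w * Z2)
print(np.max(np.abs(postm - (ap * post1 + bp * post2))))  # ~ 0 up to float error
```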

  • 6.7 This question explores the idea in Exercise 4.6 further, but except for (a)(ii) it does not depend on having completed that exercise.

    • (a) Let $(M_\theta)$ be a discrete or absolutely continuous family with range $R$. Let $(X,\Theta)$ be a Bayesian model with model family $M_\theta^n$. Let $x \in R^n$ and write $x^{(1)}=(x_1,\ldots,x_{n_1})$, $x^{(2)}=(x_{n_1+1},\ldots,x_n)$. Let $(X_1,\Theta)$ and $(X_2,\Theta|\{X_1=x^{(1)}\})$ be Bayesian models with model families $M_\theta^{n_1}$ and $M_\theta^{n_2}$, where $n_1+n_2=n$.

      • (i) Show that

        $(\Theta|\{X_1=x^{(1)}\})|\{X_2=x^{(2)}\} \overset{d}{=} \Theta|\{X=x\}.$

        Use likelihood functions to write your argument in a way that covers both the discrete and absolutely continuous cases.

      • (ii) What is the connection between this fact and Exercise 4.6?

    • (b) Rewrite your solution to (a)(i) in a Bayesian shorthand notation of your choice.
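
The sequential-updating identity in (a)(i) can be illustrated on a grid (a sketch under hypothetical choices: Bernoulli data, a Beta(2,2) prior, and an arbitrary split of the sample):

```python
import numpy as np

# Grid sketch of 6.7(a)(i): updating on x^(1) and then x^(2) matches
# updating on all of x at once. All concrete numbers are illustrative.
th = np.linspace(1e-6, 1 - 1e-6, 100001)
h = th[1] - th[0]

prior = th**(2 - 1) * (1 - th)**(2 - 1)    # Beta(2,2) prior kernel
prior /= prior.sum() * h

x = np.array([1, 0, 1, 1, 0, 1])           # all n observations
x1, x2 = x[:3], x[3:]                      # the split x^(1), x^(2)

def update(density, data):
    # multiply by the Bernoulli likelihood of `data`, renormalize on the grid
    k, m = data.sum(), len(data)
    post = density * th**k * (1 - th)**(m - k)
    return post / (post.sum() * h)

all_at_once = update(prior, x)
two_stage = update(update(prior, x1), x2)
print(np.max(np.abs(all_at_once - two_stage)))  # ~ 0 up to float error
```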