14.4 Exercises

  1. g-and-k distribution for financial returns continues

    Simulate a dataset following the specification given in the g-and-k distribution for financial returns example. Set \(\theta_1 = 0.8\), \(a = 1\), \(b = 0.5\), \(g = -1\), and \(k = 1\), with a sample size of 500, and use the same priors as in that example. Implement the ABC accept/reject (ABC-AR) algorithm from scratch using one million prior draws, selecting the 1,000 draws with the smallest distance.¹ A starter sketch in R is given after the bullet points below.

    • Perform a linear regression adjustment on the posterior draws from the ABC-AR algorithm (ABC-AR-Adj).
    • Compare the results with those obtained using the ABC-AR implementation in the EasyABC package, keeping the computational time roughly comparable between the two implementations.
    • Compare the posterior results of ABC-AR, ABC-AR-Adj, and EasyABC with the population values.
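
    The following is a minimal R sketch of the accept/reject step and the regression adjustment. The simulator `sim_gk` (here a standardized AR(1) process pushed through the g-and-k quantile function), the uniform priors, and the octile summary statistics are assumptions for illustration; replace them with the exact specification and priors of the book's example.

    ``` r
    set.seed(10)
    # g-and-k quantile function; c = 0.8 is the conventional choice
    q_gk <- function(z, a, b, g, k) {
      a + b * (1 + 0.8 * tanh(g * z / 2)) * z * (1 + z^2)^k
    }
    # Placeholder simulator: standardized AR(1) innovations with
    # coefficient theta1 pushed through the g-and-k quantile; replace
    # with the exact specification of the financial-returns example
    sim_gk <- function(th, n) {
      z <- as.numeric(arima.sim(list(ar = th[1]), n = n))
      q_gk(z / sd(z), th[2], th[3], th[4], th[5])
    }
    n <- 500
    y <- sim_gk(c(0.8, 1, 0.5, -1, 1), n)                 # observed data
    summ  <- function(x) quantile(x, seq(0.125, 0.875, 0.125))  # octiles
    s_obs <- summ(y)
    S <- 1e6                                     # one million prior draws
    theta <- cbind(runif(S, -1, 1), runif(S, 0, 5), runif(S, 0, 5),
                   runif(S, -5, 5), runif(S, 0, 5))   # placeholder priors
    # Brute-force loop: slow at S = 1e6, parallelize in practice
    s_sim <- t(apply(theta, 1, function(th) summ(sim_gk(th, n))))
    dists <- sqrt(rowSums(sweep(s_sim, 2, s_obs)^2))
    keep  <- order(dists)[1:1000]                # ABC-AR posterior draws
    post  <- theta[keep, ]
    # Local-linear regression adjustment: regress the kept draws on the
    # centered summaries and shift them toward the observed statistics
    Xc <- sweep(s_sim[keep, ], 2, s_obs)
    B  <- coef(lm(post ~ Xc))[-1, ]              # drop the intercept row
    post_adj <- post - Xc %*% B                  # ABC-AR-Adj draws
    ```

    For the comparison, `EasyABC::ABC_rejection()` with a matching number of simulations and tolerance provides the package counterpart of the accept/reject step.
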
  2. Simulation: g-and-k distribution continues

    Perform the simulation example of the Bayesian synthetic likelihood presented in the book, using the same population parameters and setting \(M = 500\) and \(S = 6{,}000\), with burn-in and thinning parameters set to \(1{,}000\) and 5, respectively. Use the BSL package in R to perform inference using the vanilla, unbiased, semi-parametric, and misspecified (mean and variance) versions of BSL. Compare the posterior distributions of the methods with the true population parameters.
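
    A hedged sketch of the BSL package calls is shown below; `fn_sim`, `fn_sum`, `log_prior`, the starting value `theta0`, and the random-walk covariance are placeholders to be filled in with the book's specification. Note that, as we read the package's interface, the argument `n` is the number of simulated datasets per likelihood estimate (our \(M\)), while `M` is the number of MCMC iterations (our \(S\)).

    ``` r
    library(BSL)
    # Placeholder model object: fn_sim simulates a dataset given theta,
    # fn_sum returns the summary statistics, log_prior the log prior
    model <- newModel(fnSim = fn_sim, fnSum = fn_sum,
                      fnLogPrior = log_prior,
                      theta0 = c(1, 0.5, -1, 1),     # placeholder start
                      thetaNames = c("a", "b", "g", "k"))
    Sigma0 <- 0.01 * diag(4)         # placeholder random-walk covariance
    fit_bsl  <- bsl(y = y, n = 500, M = 6000, model = model,
                    covRandWalk = Sigma0, method = "BSL")
    fit_ubsl <- bsl(y = y, n = 500, M = 6000, model = model,
                    covRandWalk = Sigma0, method = "uBSL")
    fit_semi <- bsl(y = y, n = 500, M = 6000, model = model,
                    covRandWalk = Sigma0, method = "semiBSL")
    fit_mean <- bsl(y = y, n = 500, M = 6000, model = model,
                    covRandWalk = Sigma0, method = "BSLmisspec",
                    misspecType = "mean")
    fit_var  <- bsl(y = y, n = 500, M = 6000, model = model,
                    covRandWalk = Sigma0, method = "BSLmisspec",
                    misspecType = "variance")
    # Apply burn-in and thinning to the stored chain before summarizing
    # (the theta slot holds the MCMC draws; check the package docs)
    draws <- fit_bsl@theta[seq(1001, 6000, by = 5), ]
    ```
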

  3. Simulate a multinomial logit model (see Section 6.5) with 3 alternatives, 2 alternative-specific regressors, and 1 individual-specific regressor. The population parameters for the alternative-specific regressors are \(-0.3\) and \(1.2\), while the population values for the individual-specific regressor are \(0.3\), \(0\), and \(0.5\) (one coefficient per alternative). All regressors are assumed to follow a standard normal distribution, and the sample size is \(1{,}000\).

    Perform inference using the INLA package; note that the Poisson trick must be used to handle multinomial models in INLA for this exercise (see Serafini (2019) for details).
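
    Below is a minimal sketch of the simulation and of the Poisson-trick data layout, following the structure of Serafini (2019): the data are expanded to one row per individual-alternative pair, an individual index (`phi`, here with a tiny fixed precision) absorbs the multinomial normalizing constants, and alternative 2 is taken as the base category for the individual-specific regressor. Treat the exact formula as an assumption to be checked against the vignette.

    ``` r
    set.seed(10)
    n <- 1000; J <- 3
    X1 <- matrix(rnorm(n * J), n, J)   # alternative-specific regressor 1
    X2 <- matrix(rnorm(n * J), n, J)   # alternative-specific regressor 2
    w  <- rnorm(n)                     # individual-specific regressor
    V  <- -0.3 * X1 + 1.2 * X2 + outer(w, c(0.3, 0, 0.5))
    P  <- exp(V) / rowSums(exp(V))     # multinomial logit probabilities
    ch <- apply(P, 1, function(p) sample(1:J, 1, prob = p))
    # Long format: one row per individual-alternative pair
    dat <- data.frame(
      Y   = as.integer(rep(ch, each = J) == rep(1:J, n)),  # choice dummy
      alt = rep(1:J, n),
      x1  = as.vector(t(X1)),
      x2  = as.vector(t(X2)),
      phi = rep(1:n, each = J)         # individual index
    )
    # Individual-specific regressor interacted with alternatives 1 and 3
    # (alternative 2 is the base category for identification)
    dat$w1 <- rep(w, each = J) * (dat$alt == 1)
    dat$w3 <- rep(w, each = J) * (dat$alt == 3)
    library(INLA)
    # The iid term with a tiny fixed precision mimics individual fixed
    # effects, which make the Poisson likelihood match the multinomial
    form <- Y ~ -1 + x1 + x2 + w1 + w3 +
      f(phi, model = "iid",
        hyper = list(prec = list(initial = -10, fixed = TRUE)))
    fit <- inla(form, family = "poisson", data = dat)
    summary(fit)
    ```
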

  4. Derive the expression for the ELBO (evidence lower bound) in the linear regression model with a conjugate prior family.
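
    As a hint, recall the generic form of the evidence lower bound, which the exercise asks you to specialize to the conjugate linear regression model:
    \[ \text{ELBO}(q) = \mathbb{E}_{q(\boldsymbol{\theta})}\left[\log p(\boldsymbol{y}, \boldsymbol{\theta})\right] - \mathbb{E}_{q(\boldsymbol{\theta})}\left[\log q(\boldsymbol{\theta})\right] = \log p(\boldsymbol{y}) - KL\left(q(\boldsymbol{\theta}) \,\|\, \pi(\boldsymbol{\theta} \mid \boldsymbol{y})\right), \]
    where \(\boldsymbol{\theta} = (\boldsymbol{\beta}, \sigma^2)\); under conjugacy, every expectation on the right-hand side is available in closed form.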

  5. Linear regression continues

    Perform inference in the linear regression example using stochastic variational inference via the automatic differentiation variational inference (ADVI) approach (Kucukelbir et al. 2017), implemented in the rstan package. In particular, consider the model
    \[ y_i = \boldsymbol{x}_i^{\top} \boldsymbol{\beta} + \mu_i, \quad \mu_i \sim N(0, \sigma^2), \] assuming non-informative independent priors:
    \(\boldsymbol{\beta} \sim N(\boldsymbol{\beta}_0, \boldsymbol{B}_0)\) and \(\sigma^2 \sim IG(\alpha_0/2, \delta_0/2)\), where \(\boldsymbol{\beta}_0 = \boldsymbol{0}_3\), \(\boldsymbol{B}_0 = 1000\boldsymbol{I}_3\), and \(\alpha_0 = \delta_0 = 0.01\). The sample size is one million.
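
    A minimal sketch using `rstan::vb()`, which implements ADVI, is given below. The design matrix (an intercept plus two standard normal regressors), the population values `beta_pop`, and the unit error standard deviation are placeholders for the book's example.

    ``` r
    library(rstan)
    # Stan program for the linear model with the stated priors
    stan_code <- "
    data {
      int<lower=0> N;
      matrix[N, 3] X;
      vector[N] y;
    }
    parameters {
      vector[3] beta;
      real<lower=0> sigma2;
    }
    model {
      beta ~ normal(0, sqrt(1000));       // B_0 = 1000 * I_3
      sigma2 ~ inv_gamma(0.005, 0.005);   // alpha_0/2 = delta_0/2
      y ~ normal(X * beta, sqrt(sigma2));
    }
    "
    set.seed(10)
    N <- 1e6
    X <- cbind(1, rnorm(N), rnorm(N))  # assumed design matrix
    beta_pop <- c(1, -0.5, 0.8)        # placeholder population values
    y <- as.vector(X %*% beta_pop + rnorm(N))   # error sd 1 (placeholder)
    mod <- stan_model(model_code = stan_code)
    # ADVI via vb(); "meanfield" is the default variational family
    fit <- vb(mod, data = list(N = N, X = X, y = y),
              algorithm = "meanfield", output_samples = 1000)
    print(fit)
    ```
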

  6. Let’s revisit the mixture regression model of Chapter 11, that is, the simple regression mixture with two components such that \(z_{i1} \sim \text{Ber}(0.5)\) and, consequently, \(z_{i2} = 1 - z_{i1}\). Assume one regressor, \(x_i \sim N(0, 1)\), \(i = 1, 2, \dots, 1{,}000\). Then,
    \[ p(y_i \mid \boldsymbol{x}_i) = 0.5 \phi(y_i \mid 2 + 1.5 x_i, 1^2) + 0.5 \phi(y_i \mid -1 + 0.5 x_i, 0.8^2). \] Set \(\alpha_{h0} = \delta_{h0} = 0.01\), \(\boldsymbol{\beta}_{h0} = \boldsymbol{0}_2\), \(\boldsymbol{B}_{h0} = \boldsymbol{I}_2\), and \(\boldsymbol{a}_0 = [1/2 \ 1/2]^{\top}\).

    Perform VB inference in this model using the CAVI algorithm.
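
    A compact CAVI sketch is given below, under the common mean-field factorization \(q(\boldsymbol{\beta}_h)\, q(\sigma_h^2)\, q(\boldsymbol{\lambda}) \prod_i q(z_i)\) with Gaussian, inverse-gamma, and Dirichlet factors; the coordinate updates follow the standard conditionally conjugate recipe, and the starting values are arbitrary choices made here to break label symmetry.

    ``` r
    set.seed(10)
    n <- 1000; x <- rnorm(n); X <- cbind(1, x)
    z1 <- rbinom(n, 1, 0.5)
    y  <- ifelse(z1 == 1, rnorm(n, 2 + 1.5 * x, 1),
                          rnorm(n, -1 + 0.5 * x, 0.8))
    # Hyperparameters as stated above
    a0 <- c(0.5, 0.5); al0 <- 0.01; de0 <- 0.01
    B0inv <- diag(2)                            # B_h0 = I_2; beta_h0 = 0
    # Arbitrary starting values (perturbed to break label symmetry)
    m  <- list(c(1, 1), c(-1, 0)); Vb <- list(diag(2), diag(2))
    al <- c(2, 2); de <- c(2, 2); a <- a0 + n / 2
    for (it in 1:200) {
      # Update q(z_i): responsibilities from the expected log joint
      lr <- sapply(1:2, function(h) {
        res2 <- as.vector((y - X %*% m[[h]])^2) +
          rowSums((X %*% Vb[[h]]) * X)
        digamma(a[h]) - 0.5 * (log(de[h] / 2) - digamma(al[h] / 2)) -
          0.5 * (al[h] / de[h]) * res2
      })
      r <- exp(lr - apply(lr, 1, max)); r <- r / rowSums(r)
      for (h in 1:2) {
        Einv <- al[h] / de[h]                   # E[1 / sigma_h^2]
        # Update q(beta_h): Gaussian
        Vb[[h]] <- solve(B0inv + Einv * crossprod(sqrt(r[, h]) * X))
        m[[h]]  <- Vb[[h]] %*% crossprod(X, Einv * r[, h] * y)
        res2 <- as.vector((y - X %*% m[[h]])^2) +
          rowSums((X %*% Vb[[h]]) * X)
        # Update q(sigma_h^2): inverse gamma IG(al_h/2, de_h/2)
        al[h] <- al0 + sum(r[, h])
        de[h] <- de0 + sum(r[, h] * res2)
      }
      a <- a0 + colSums(r)                      # update q(lambda)
    }
    cbind(beta1 = m[[1]], beta2 = m[[2]])       # posterior means of betas
    c(sigma2 = de / (al - 2), lambda = a / sum(a))
    ```
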

References

Kucukelbir, Alp, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M. Blei. 2017. “Automatic Differentiation Variational Inference.” Journal of Machine Learning Research 18 (14): 1–45.
Serafini, Francesco. 2019. “Multinomial Logit Models with INLA.” R-INLA Tutorial. https://inla.r-inla-download.org/r-inla.org/doc/vignettes/multinomial.pdf.

  1. Note that this setting does not satisfy the asymptotic requirements for Bayesian consistency. However, it serves as a pedagogical exercise.↩︎