Chapter 14 Approximate Bayesian methods
Approximate Bayesian methods are a family of techniques designed to handle situations where the likelihood function lacks an analytical expression, is highly complex, or where the problem is high-dimensional, whether because of a large parameter space or a massive dataset (G. M. Martin, Frazier, and Robert 2024a). In the former case, traditional Markov chain Monte Carlo (MCMC) and importance sampling algorithms fail to provide a solution. In the latter, these algorithms struggle to produce accurate estimates within a reasonable time frame unless users modify them (see Chapter 12).
However, there is no free lunch. Approximate Bayesian methods address these challenges at the cost of providing an approximation to the posterior distribution rather than the exact posterior. Nonetheless, asymptotic results show that the approximation improves as the sample size increases.
In this chapter, I first present simulation-based approaches, which are designed for situations where the likelihood is highly complex and may lack an analytical expression; specifically, I discuss approximate Bayesian computation (ABC) and Bayesian synthetic likelihood (BSL), the two most common simulation-based approaches. In the second part, I introduce optimization approaches, which are intended to handle high-dimensional problems; here I present integrated nested Laplace approximations (INLA) and variational Bayes (VB), the two most common optimization approaches.
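To give a first flavor of the simulation-based idea before the formal treatment, the sketch below implements the simplest rejection version of ABC on a toy model: draw a parameter from the prior, simulate data from the model, and keep the draw only if a summary of the simulated data falls close to the same summary of the observed data. The toy normal model, the function names, and the tolerance are my own illustrative assumptions, not a method or example taken from later sections.

```python
import numpy as np

def abc_rejection(observed, prior_sampler, simulator, summary, tol, n_draws, rng):
    """Rejection ABC: keep prior draws whose simulated summary lies within
    `tol` of the observed summary. Returns the accepted parameter draws,
    an approximate sample from the posterior."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)          # 1. draw parameter from the prior
        x_sim = simulator(theta, rng)       # 2. simulate data from the model
        if abs(summary(x_sim) - s_obs) < tol:  # 3. compare summaries
            accepted.append(theta)
    return np.array(accepted)

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=100)  # "observed" data, true mean = 2

draws = abc_rejection(
    observed=y,
    prior_sampler=lambda r: r.normal(0.0, 5.0),           # vague N(0, 25) prior on the mean
    simulator=lambda mu, r: r.normal(mu, 1.0, size=100),  # model with known scale
    summary=np.mean,                                      # sample mean as summary statistic
    tol=0.1, n_draws=20000, rng=rng,
)
print(len(draws), draws.mean())
```

The accepted draws concentrate around the observed sample mean; shrinking `tol` sharpens the approximation at the cost of fewer acceptances, a trade-off discussed when ABC is developed formally below.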