13.11 Summary
In this chapter, we review some of the most common approaches to causal inference. We begin by outlining the identification restrictions underlying each method and then describe how these restrictions are incorporated into a Bayesian framework for inference.
A natural point of departure is the use of Directed Acyclic Graphs (DAGs), which provide a graphical representation of the underlying structural or causal model. The next step is to identify an exogenous source of variation in the assignment rule, such as an instrument or an institutional arrangement, whose variation identifies the causal effect of the treatment, or relevant regressor, on the outcome.
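As a minimal illustration of the graphical reasoning described above, the following sketch encodes the canonical confounding DAG in plain Python (the node names and the dictionary representation are ours, chosen for illustration, not the chapter's notation):

```python
# Toy DAG for the canonical confounding structure (illustrative names):
# X -> D (a confounder affects treatment), X -> Y, and D -> Y.
edges = {("X", "D"), ("X", "Y"), ("D", "Y")}

def parents(node):
    """Direct causes of `node` in the DAG."""
    return {a for (a, b) in edges if b == node}

# The backdoor path D <- X -> Y is blocked by conditioning on X,
# so in this graph the parents of D form a valid adjustment set
# for the effect of D on Y.
adjustment_set = parents("D")
```

In this toy graph the adjustment set is simply `{"X"}`; in richer graphs, identifying a valid adjustment set requires checking the backdoor criterion path by path.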
Given such sources of variation, one can use parametric, semiparametric, or nonparametric models to specify the conditional means of the potential outcomes given treatment assignment, thereby constructing counterfactual scenarios and enabling inference on causal estimands such as the average treatment effect. Particular attention must be paid to the correct specification of the treatment assignment mechanism (the propensity score) and the outcome regression, as both play a crucial role in credible causal inference.
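The interplay between the propensity score and the outcome regression can be sketched with the doubly robust (AIPW) estimator of the average treatment effect on simulated data. All data-generating choices and variable names below are our own, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                       # confounder
p = 1 / (1 + np.exp(-0.8 * x))               # true propensity score
d = rng.binomial(1, p)                       # treatment assignment
y = 1.0 * d + 2.0 * x + rng.normal(size=n)   # true ATE = 1.0

# Naive difference in means is confounded by x.
naive = y[d == 1].mean() - y[d == 0].mean()

# Outcome regressions: OLS of y on (1, x) within each treatment arm.
def ols_predict(xs, ys, xnew):
    X = np.column_stack([np.ones_like(xs), xs])
    beta = np.linalg.lstsq(X, ys, rcond=None)[0]
    return beta[0] + beta[1] * xnew

mu1 = ols_predict(x[d == 1], y[d == 1], x)
mu0 = ols_predict(x[d == 0], y[d == 0], x)

# Propensity score: logistic regression of d on (1, x) via Newton's method.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    e = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (d - e)                     # score of the log-likelihood
    hess = -(X * (e * (1 - e))[:, None]).T @ X
    beta -= np.linalg.solve(hess, grad)
e = 1 / (1 + np.exp(-X @ beta))

# AIPW (doubly robust) estimate of the ATE: consistent if either the
# propensity score or the outcome regression is correctly specified.
ate = np.mean(mu1 - mu0
              + d * (y - mu1) / e
              - (1 - d) * (y - mu0) / (1 - e))
```

Here the naive difference in means overstates the effect because the confounder drives both treatment and outcome, while the AIPW combination of the two nuisance models recovers an estimate close to the true value of 1.0.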
This highlights the growing importance of modern machine learning methods for estimating high-dimensional or complex nuisance functions (see Chapter 12 and the last section). At the same time, however, it is essential to correct for the regularization and overfitting biases that machine learning methods may introduce in order to obtain valid causal inferences.
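One standard device for controlling the overfitting bias of machine-learned nuisance functions is sample splitting (cross-fitting), in the spirit of double/debiased machine learning: the nuisances used for each half of the data are fit on the other half. The sketch below uses a simple k-nearest-neighbour regression as a stand-in for an arbitrary flexible learner; the setup and names are ours, not the chapter's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=n)
e_true = 1 / (1 + np.exp(-x))
d = rng.binomial(1, e_true)
y = 1.5 * d + np.sin(2 * x) + rng.normal(scale=0.5, size=n)  # true ATE = 1.5

def knn_fit(x_tr, y_tr, x_te, k=50):
    """k-NN regression: a stand-in for any flexible ML learner."""
    out = np.empty(len(x_te))
    for i, xi in enumerate(x_te):
        idx = np.argsort(np.abs(x_tr - xi))[:k]
        out[i] = y_tr[idx].mean()
    return out

# Two-fold cross-fitting: nuisances for fold a are trained on fold b.
halves = np.array_split(rng.permutation(n), 2)
psi = np.empty(n)
for a, b in [(halves[0], halves[1]), (halves[1], halves[0])]:
    mu1 = knn_fit(x[b][d[b] == 1], y[b][d[b] == 1], x[a])
    mu0 = knn_fit(x[b][d[b] == 0], y[b][d[b] == 0], x[a])
    eh = np.clip(knn_fit(x[b], d[b].astype(float), x[a]), 0.05, 0.95)
    psi[a] = (mu1 - mu0
              + d[a] * (y[a] - mu1) / eh
              - (1 - d[a]) * (y[a] - mu0) / (1 - eh))
ate = psi.mean()
```

Because each observation's nuisance predictions never use that observation for training, the estimator avoids the own-observation overfitting bias that would otherwise contaminate the residual terms.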
Finally, the Bayesian framework provides a coherent way to integrate these identification strategies with prior information and to obtain full posterior distributions of causal estimands, based on both the marginal distributions of the potential outcomes and their joint distribution, thereby offering a unified approach to causal inference.
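A stylized sketch of full posterior inference for the ATE is given below, under a normal model for each arm with the Jeffreys prior and a randomized assignment, so that the marginal arm means identify the potential-outcome means. All modeling choices here are our own simplifications, not the chapter's models:

```python
import numpy as np

rng = np.random.default_rng(2)
# Randomized treatment: the group means identify E[Y(1)] and E[Y(0)].
n = 2000
d = rng.binomial(1, 0.5, size=n)
y = 2.0 * d + rng.normal(size=n)            # true ATE = 2.0

draws = 4000
ate_post = np.empty(draws)
for t in range(draws):
    mus = []
    for arm in (1, 0):
        ya = y[d == arm]
        m, s2, na = ya.mean(), ya.var(ddof=1), len(ya)
        # Normal model, Jeffreys prior:
        #   sigma^2 | data ~ scaled inverse chi-square(na - 1),
        #   mu | sigma^2, data ~ N(ybar, sigma^2 / na).
        sig2 = (na - 1) * s2 / rng.chisquare(na - 1)
        mus.append(rng.normal(m, np.sqrt(sig2 / na)))
    ate_post[t] = mus[0] - mus[1]           # draw of E[Y(1)] - E[Y(0)]

post_mean = ate_post.mean()
ci = np.quantile(ate_post, [0.025, 0.975])  # 95% credible interval
```

The resulting draws approximate the full posterior of the ATE, from which any summary (mean, quantiles, tail probabilities) follows directly; estimands that depend on the joint distribution of the potential outcomes would additionally require assumptions linking Y(1) and Y(0), since the data never reveal both for the same unit.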