Journal of Causal Inference (Nov 2023)

All models are wrong, but which are useful? Comparing parametric and nonparametric estimation of causal effects in finite samples

  • Kara E. Rudolph
  • Nicholas T. Williams
  • Caleb H. Miles
  • Joseph Antonelli
  • Iván Díaz

DOI
https://doi.org/10.1515/jci-2023-0022
Journal volume & issue
Vol. 11, no. 1
pp. 315–31

Abstract


There is a long-standing debate in the statistical, epidemiological, and econometric fields as to whether nonparametric estimation that uses machine learning in model fitting confers any meaningful advantage over simpler, parametric approaches in finite-sample estimation of causal effects. We address the question: when estimating the effect of a treatment on an outcome, how much does the choice of nonparametric vs. parametric estimation matter? Instead of answering this question with simulations that reflect a few chosen data scenarios, we propose a novel approach to compare estimators across a large number of data-generating mechanisms drawn from nonparametric models with semi-informative priors. We apply this proposed approach and compare the performance of two nonparametric estimators (Bayesian additive regression trees and a targeted minimum loss-based estimator) to two parametric estimators (a logistic regression-based plug-in estimator and a propensity score estimator) in terms of estimating the average treatment effect across thousands of data-generating mechanisms. We summarize performance in terms of bias, confidence interval coverage, and mean squared error. We find that the two nonparametric estimators can substantially reduce bias compared to the two parametric estimators in large-sample settings characterized by interactions and nonlinearities, while compromising very little in performance even in simple, small-sample settings.
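To make the two parametric estimators mentioned in the abstract concrete, the following is a minimal sketch (not the paper's actual implementation) of a plug-in (g-computation) estimator and a propensity-score-based inverse-probability-weighted (IPW) estimator of the average treatment effect, applied to a single simulated data-generating mechanism. The data-generating mechanism, sample size, and coefficient values here are illustrative assumptions; the paper draws thousands of such mechanisms from nonparametric models with semi-informative priors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
X = rng.normal(size=(n, 2))

# Illustrative treatment assignment: logistic propensity score in X
logit_ps = 0.4 * X[:, 0] - 0.3 * X[:, 1]
ps_true = 1.0 / (1.0 + np.exp(-logit_ps))
A = rng.binomial(1, ps_true)

# Illustrative outcome model; the true ATE is 2.0 by construction
Y = 2.0 * A + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

# --- Plug-in (g-computation) estimator ---
# Fit an outcome regression with treatment-covariate interactions,
# then average predicted outcomes under A=1 and A=0 for everyone.
D = np.column_stack([np.ones(n), A, X, A[:, None] * X])
beta, *_ = np.linalg.lstsq(D, Y, rcond=None)
D1 = np.column_stack([np.ones(n), np.ones(n), X, X])
D0 = np.column_stack([np.ones(n), np.zeros(n), X, np.zeros_like(X)])
ate_plugin = np.mean(D1 @ beta - D0 @ beta)

# --- IPW estimator ---
# Fit a logistic regression for the propensity score by Newton-Raphson,
# then reweight observed outcomes by the inverse of those probabilities.
Z = np.column_stack([np.ones(n), X])
theta = np.zeros(Z.shape[1])
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-Z @ theta))
    grad = Z.T @ (A - p)                       # score of the log-likelihood
    H = Z.T @ (Z * (p * (1 - p))[:, None])     # observed information
    theta += np.linalg.solve(H, grad)
phat = 1.0 / (1.0 + np.exp(-Z @ theta))
ate_ipw = np.mean(A * Y / phat - (1 - A) * Y / (1 - phat))

print(f"plug-in ATE: {ate_plugin:.2f}, IPW ATE: {ate_ipw:.2f}")
```

Because the fitted models match the (assumed) data-generating mechanism here, both estimates land near the true value of 2.0; the paper's point is precisely that under interactions and nonlinearities the parametric models are misspecified, which is where the nonparametric estimators (BART, TMLE) reduce bias.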

Keywords