Propensity Score Methods for Causal Inference: On the Relative Importance of Covariate Selection, Reliable Measurement, and Choice of Propensity Score Technique
The popularity of propensity score (PS) methods for estimating causal treatment effects from observational studies has increased during the past decades. However, the success of these methods in removing selection bias rests mainly on strong assumptions, such as the strong ignorability assumption, and on the competent implementation of a specific propensity score technique. After giving a brief introduction to the Rubin Causal Model and different types of propensity score techniques, the paper assesses the relative importance of three factors in removing selection bias in practice: (i) the availability of covariates that are related to both the selection process and the outcome under investigation; (ii) the reliability of the covariates’ measurements; and (iii) the choice of a specific analytic method for estimating the treatment effect—either a specific propensity score technique (PS matching, PS stratification, inverse-propensity weighting, and PS regression adjustment) or standard regression approaches. The importance of these three factors is investigated by reviewing different within-study comparisons and meta-analyses. Within-study comparisons enable an empirical assessment of PS methods’ performance in removing selection bias since they contrast the estimated treatment effect from an observational study with an estimate from a corresponding randomized experiment. The empirical evidence indicates that the selection of covariates counts most in reducing selection bias, their reliable measurement next most, and the mode of data analysis—either a specific propensity score technique or standard regression—is of least importance. Additional evidence suggests that the crucial strong ignorability assumption is most likely met if pretest measures of the outcome or constructs that directly determine the selection process are available and reliably measured.
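To make one of the techniques named above concrete, the following is a minimal sketch of inverse-propensity weighting on simulated data. Everything here is hypothetical and not taken from the paper: the data-generating model, the true effect of 2.0, and the hand-rolled logistic propensity fit are all illustrative assumptions chosen so the example is self-contained.

```python
import numpy as np

# Simulated observational study (hypothetical numbers, not from the paper):
# a single confounder x drives both treatment selection and the outcome.
rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)                          # confounder
p_true = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))     # true selection model
t = rng.binomial(1, p_true)                     # treatment indicator
y = 2.0 * t + 1.5 * x + rng.normal(size=n)      # true treatment effect = 2.0

# Fit a logistic propensity model P(T=1 | x) by gradient ascent
# (kept dependency-free instead of using a fitted-model library).
X = np.column_stack([np.ones(n), x])
w = np.zeros(2)
for _ in range(1000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.5 * X.T @ (t - p) / n
ps = 1 / (1 + np.exp(-X @ w))                   # estimated propensity scores

# Hajek-style inverse-propensity-weighted estimate of the average
# treatment effect: reweight each group to the full population.
ate_ipw = (np.sum(t * y / ps) / np.sum(t / ps)
           - np.sum((1 - t) * y / (1 - ps)) / np.sum((1 - t) / (1 - ps)))

# Naive difference in means ignores selection on x and is biased upward.
ate_naive = y[t == 1].mean() - y[t == 0].mean()

print(f"IPW estimate:   {ate_ipw:.3f}")
print(f"Naive estimate: {ate_naive:.3f}")
```

With a correctly specified and well-estimated propensity model, the weighted estimate recovers the true effect far more closely than the naive contrast, which is the core idea the within-study comparisons reviewed in the paper put to an empirical test.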
- David S. Lee & Thomas Lemieux, 2009. "Regression Discontinuity Designs in Economics," NBER Working Papers 14723, National Bureau of Economic Research, Inc.
- Heller, Ruth & Rosenbaum, Paul R. & Small, Dylan S., 2009. "Split Samples and Design Sensitivity in Observational Studies," Journal of the American Statistical Association, American Statistical Association, vol. 104(487), pages 1090-1101.
- LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-20, September.
- Heckman, James, 2013. "Sample selection bias as a specification error," Publishing House "SINERGIA PRESS", vol. 31(3), pages 129-137.
- Guido W. Imbens, 2003. "Nonparametric Estimation of Average Treatment Effects under Exogeneity: A Review," NBER Technical Working Papers 0294, National Bureau of Economic Research, Inc.
- Guido W. Imbens, 2004. "Nonparametric Estimation of Average Treatment Effects Under Exogeneity: A Review," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 4-29, February.
- Heckman, James J, 1974. "Shadow Prices, Market Wages, and Labor Supply," Econometrica, Econometric Society, vol. 42(4), pages 679-94, July.
- Shadish, William R. & Clark, M. H. & Steiner, Peter M., 2008. "Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments," Journal of the American Statistical Association, American Statistical Association, vol. 103(484), pages 1334-1344.
- Steven Glazerman & Dan Levy & David Myers, 2003. "Nonexperimental Versus Experimental Estimates of Earnings Impacts," Mathematica Policy Research Reports 7c8bd68ac8db47caa57c70ee1, Mathematica Policy Research.
- Juan Jose Diaz & Sudhanshu Handa, 2006. "An Assessment of Propensity Score Matching as a Nonexperimental Impact Estimator: Evidence from Mexico’s PROGRESA Program," Journal of Human Resources, University of Wisconsin Press, vol. 41(2).
- Juan José Díaz & Sudhanshu Handa, 2005. "An Assessment of Propensity Score Matching as a Non Experimental Impact Estimator: Evidence from Mexico's PROGRESA Program," IDB Publications (Working Papers) 25418, Inter-American Development Bank.
- Thomas D. Cook & William R. Shadish & Vivian C. Wong, 2008. "Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 27(4), pages 724-750.
When requesting a correction, please mention this item's handle: RePEc:laa:wpaper:09. See general information about how to correct material in RePEc.