Using Model Selection Algorithms To Obtain Reliable Coefficient Estimates
This review surveys a number of common Model Selection Algorithms (MSAs), discusses how they relate to each other, and identifies factors that explain their relative performance. At the heart of MSA performance is the trade-off between Type I and Type II errors: some relevant variables will be mistakenly excluded, and some irrelevant variables will be retained by chance. A successful MSA finds the optimal trade-off between the two types of errors for a given data environment. Whether a given MSA succeeds in a given environment depends on the relative costs of these two types of errors. We use Monte Carlo experimentation to illustrate these issues. We confirm that no MSA does best in all circumstances. Even the worst MSA in terms of overall performance, the strategy of including all candidate variables, sometimes performs best (viz., when all candidate variables are relevant). We also show how (i) the ratio of relevant to total candidate variables and (ii) noise in the data-generating process (DGP) affect relative MSA performance. Finally, we discuss a number of issues that complicate the task of MSAs in producing reliable coefficient estimates.
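The Monte Carlo logic described above can be sketched in a few lines. The following is a minimal illustration, not a reproduction of the paper's experiments: the "MSA" here is a simple hypothetical one-pass rule that keeps regressors whose |t|-statistic exceeds a critical value and then refits, compared against the include-all-candidates strategy. All function names, parameter values, and the DGP (a linear model with standard-normal regressors) are this sketch's own assumptions.

```python
import numpy as np

def ols(X, y):
    """OLS coefficients and conventional standard errors."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, se

def select_by_t(X, y, t_crit=1.96):
    """Hypothetical one-pass MSA: keep regressors with |t| > t_crit, refit.

    Dropped regressors get a coefficient of zero (a Type II error if they
    were relevant); retained irrelevant ones are Type I errors.
    """
    beta, se = ols(X, y)
    keep = np.abs(beta / se) > t_crit
    full = np.zeros(X.shape[1])
    if keep.any():
        full[keep], _ = ols(X[:, keep], y)
    return full

def monte_carlo(n=100, k_total=10, k_rel=3, noise=1.0, reps=200, seed=0):
    """Compare coefficient MSE of include-all vs. t-test selection.

    k_rel of k_total candidate variables are relevant (true beta = 1);
    `noise` scales the DGP error term.
    """
    rng = np.random.default_rng(seed)
    true_beta = np.zeros(k_total)
    true_beta[:k_rel] = 1.0
    mse_all = mse_sel = 0.0
    for _ in range(reps):
        X = rng.standard_normal((n, k_total))
        y = X @ true_beta + noise * rng.standard_normal(n)
        b_all, _ = ols(X, y)
        b_sel = select_by_t(X, y)
        mse_all += np.sum((b_all - true_beta) ** 2)
        mse_sel += np.sum((b_sel - true_beta) ** 2)
    return mse_all / reps, mse_sel / reps
```

With few relevant candidates (here 3 of 10), selection typically achieves lower coefficient MSE than including everything, because zeroing out irrelevant regressors removes their estimation noise. Setting `k_rel = k_total` reverses the comparison in the direction the abstract describes: when every candidate is relevant, nothing is gained by selection and any mistaken exclusion is pure loss.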
Volume 27, Issue 2 (April 2013)
Provider web page: http://www.blackwellpublishing.com/journal.asp?ref=0950-0804
Order information: http://www.blackwellpublishing.com/subs.asp?ref=0950-0804