
Out-Of-Sample Comparisons of Overfit Models

Contents:

Author Info

  • Calhoun, Gray

Abstract

This paper uses dimension asymptotics to study why overfit linear regression models should be compared out-of-sample; we let the number of predictors used by the larger model increase with the number of observations so that their ratio remains uniformly positive. Our analysis gives a theoretical motivation for using out-of-sample (OOS) comparisons: the DMW OOS test allows a forecaster to conduct inference about the expected future accuracy of his or her models when one or both are overfit. We show analytically and through Monte Carlo experiments that standard full-sample test statistics cannot test hypotheses about this performance. Our paper also shows that popular test- and training-sample sizes may give misleading results if researchers are concerned about overfitting. We show that P²/T must converge to zero for the DMW test to give valid inference about expected forecast accuracy; otherwise, the test measures the accuracy of the estimates constructed using only the training sample. In empirical research, P is typically much larger than this. Our simulations indicate that using large values of P with the DMW test gives undersized tests with low power, so this practice may favor simple benchmark models too much.
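The DMW comparison discussed in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the fixed estimation scheme, the sample sizes R and P, and the pure-noise data-generating process (so both models have equal true accuracy by construction) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: T observations split into a training sample of size R
# and a test sample of size P. The abstract's condition concerns P^2/T.
T, k = 500, 20
R = 400          # training-sample size
P = T - R        # test-sample (out-of-sample) size
X = rng.standard_normal((T, k))
y = rng.standard_normal(T)   # predictors are irrelevant: the larger model overfits

# Fixed estimation scheme: fit both models once on the training sample.
beta0 = y[:R].mean()                                   # benchmark: sample mean
beta1, *_ = np.linalg.lstsq(X[:R], y[:R], rcond=None)  # larger (overfit) model

e0 = y[R:] - beta0            # benchmark OOS forecast errors
e1 = y[R:] - X[R:] @ beta1    # larger-model OOS forecast errors

# DMW statistic: t-test on the loss differential d_t = e0_t^2 - e1_t^2
d = e0**2 - e1**2
dmw = d.mean() / np.sqrt(d.var(ddof=1) / P)
print(f"DMW statistic: {dmw:.3f}")
```

With serially correlated forecast errors, the denominator would use a HAC variance estimator rather than the simple sample variance used here.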

Download Info

File URL: http://www.econ.iastate.edu/sites/default/files/publications/papers/p12462-2014-03-28.pdf
Download Restriction: no

Bibliographic Info

Paper provided by Iowa State University, Department of Economics in its series Staff General Research Papers with number 32462.

Date of creation: 28 Mar 2014
Handle: RePEc:isu:genres:32462

Contact details of provider:
Postal: Iowa State University, Dept. of Economics, 260 Heady Hall, Ames, IA 50011-1070
Phone: +1 515.294.6741
Fax: +1 515.294.0221
Web page: http://www.econ.iastate.edu
More information through EDIRC

Related research

Keywords: Generalization Error; Forecasting; Model Selection; t-test; Dimension Asymptotics


References

References listed on IDEAS
  1. McCracken, Michael W., 2007. "Asymptotics for out of sample tests of Granger causality," Journal of Econometrics, Elsevier, vol. 140(2), pages 719-752, October.
  2. Valentina Corradi & Norman Swanson, 2003. "Some Recent Developments in Predictive Accuracy Testing With Nested Models and (Generic) Nonlinear Alternatives," Departmental Working Papers 200316, Rutgers University, Department of Economics.
  3. Raffaella Giacomini & Barbara Rossi, 2005. "Detecting and Predicting Forecast Breakdowns," UCLA Economics Working Papers 845, UCLA Department of Economics.
  4. Todd E. Clark & Michael W. McCracken, 2009. "In-sample tests of predictive ability: a new approach," Research Working Paper RWP 09-10, Federal Reserve Bank of Kansas City.
  5. Diebold, Francis X & Mariano, Roberto S, 1995. "Comparing Predictive Accuracy," Journal of Business & Economic Statistics, American Statistical Association, vol. 13(3), pages 253-263, July.
  6. Inoue, Atsushi & Kilian, Lutz, 2002. "In-Sample or Out-of-Sample Tests of Predictability: Which One Should We Use?," CEPR Discussion Papers 3671, C.E.P.R. Discussion Papers.
  7. Kenneth D. West & Todd Clark, 2006. "Approximately Normal Tests for Equal Predictive Accuracy in Nested Models," NBER Technical Working Papers 0326, National Bureau of Economic Research, Inc.
  8. Todd Clark & Michael McCracken, 2005. "Evaluating Direct Multistep Forecasts," Econometric Reviews, Taylor & Francis Journals, vol. 24(4), pages 369-404.
  9. Todd E. Clark & Michael W. McCracken, 1999. "Tests of equal forecast accuracy and encompassing for nested models," Research Working Paper 99-11, Federal Reserve Bank of Kansas City.
  10. Inoue, Atsushi & Kilian, Lutz, 2006. "On the selection of forecasting models," Journal of Econometrics, Elsevier, vol. 130(2), pages 273-306, February.
  11. Todd E. Clark & Kenneth D. West, 2004. "Using out-of-sample mean squared prediction errors to test the Martingale difference hypothesis," Research Working Paper RWP 04-03, Federal Reserve Bank of Kansas City.
  12. Raffaella Giacomini & Halbert White, 2003. "Tests of Conditional Predictive Ability," Econometrics 0308001, EconWPA.
  13. Todd E. Clark & Michael W. McCracken, 2009. "Nested forecast model comparisons: a new approach to testing equal accuracy," Research Working Paper RWP 09-11, Federal Reserve Bank of Kansas City.
  14. de Jong, Robert M., 1997. "Central Limit Theorems for Dependent Heterogeneous Random Variables," Econometric Theory, Cambridge University Press, vol. 13(3), pages 353-367, June.
  15. Calhoun, Gray, 2010. "Hypothesis Testing in Linear Regression when K/N is Large," Staff General Research Papers 32216, Iowa State University, Department of Economics.
  16. Jong, R.M. de & Davidson, J., 1996. "Consistency of Kernel Estimators of Heteroscedastic and Autocorrelated Covariance Matrices," Discussion Paper 1996-52, Tilburg University, Center for Economic Research.
  17. Ivo Welch & Amit Goyal, 2008. "A Comprehensive Look at The Empirical Performance of Equity Premium Prediction," Review of Financial Studies, Society for Financial Studies, vol. 21(4), pages 1455-1508, July.
  18. Stanislav Anatolyev, 2007. "Inference about predictive ability when there are many predictors," Working Papers w0096, Center for Economic and Financial Research (CEFIR).
  19. Achim Zeileis, "Econometric Computing with HC and HAC Covariance Matrix Estimators," Journal of Statistical Software, vol. 11(i10).

Citations


Cited by:
  1. Todd E. Clark & Michael W. McCracken, 2011. "Advances in forecast evaluation," Working Papers 2011-025, Federal Reserve Bank of St. Louis.
  2. Travis J. Berge, 2011. "Forecasting disconnected exchange rates," Research Working Paper RWP 11-12, Federal Reserve Bank of Kansas City.
  3. Barbara Rossi, 2011. "Advances in Forecasting Under Instability," Working Papers 11-20, Duke University, Department of Economics.


