Out-Of-Sample Comparisons of Overfit Models

  • Calhoun, Gray

This paper uses dimension asymptotics to study why overfit linear regression models should be compared out-of-sample; we let the number of predictors used by the larger model grow with the number of observations so that their ratio remains uniformly positive. Our analysis gives a theoretical motivation for using out-of-sample (OOS) comparisons: the Diebold-Mariano-West (DMW) OOS test allows a forecaster to conduct inference about the expected future accuracy of his or her models even when one or both models is overfit. We show analytically and through Monte Carlo experiments that standard full-sample test statistics cannot test hypotheses about this performance. The paper also shows that popular test and training sample sizes may give misleading results when researchers are concerned about overfit. Writing P for the number of out-of-sample observations and T for the total number of observations, we show that P²/T must converge to zero for the DMW test to give valid inference about expected forecast accuracy; otherwise the test measures the accuracy of the estimates constructed using only the training sample. In empirical research, P is typically much larger than this condition allows. Our simulations indicate that using large values of P with the DMW test gives undersized tests with low power, so this practice may favor simple benchmark models too much.
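
To make the mechanics concrete, the following is a minimal sketch, in Python, of a DMW-style out-of-sample comparison of two nested linear models under squared-error loss. It assumes a recursive (expanding-window) estimation scheme, one-step-ahead forecasts, and a plain sample variance for the loss differential rather than a HAC estimator; the function names, the simulated data, and the even training/test split are illustrative assumptions, not taken from the paper.

# Sketch of a Diebold-Mariano-West (DMW) style out-of-sample test for two
# nested linear regression models under squared-error loss. Assumptions not
# taken from the paper: recursive (expanding-window) estimation, one-step-ahead
# forecasts, and no HAC correction. All names and data are illustrative.
import numpy as np

def ols_forecast(X_train, y_train, x_next):
    """Fit OLS on the training window and forecast the next observation."""
    beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return x_next @ beta

def dmw_oos_test(y, X_small, X_large, R):
    """DMW t-type statistic for equal expected OOS accuracy of nested models.

    y       : (T,) target series
    X_small : (T, k0) benchmark regressors (include a constant)
    X_large : (T, k1) larger model's regressors, k1 > k0
    R       : size of the initial training sample; P = T - R forecasts
    Compare the returned statistic with standard normal critical values.
    """
    T = len(y)
    P = T - R
    d = np.empty(P)                      # loss differentials
    for i, t in enumerate(range(R, T)):  # expand the estimation window each period
        f0 = ols_forecast(X_small[:t], y[:t], X_small[t])
        f1 = ols_forecast(X_large[:t], y[:t], X_large[t])
        d[i] = (y[t] - f0) ** 2 - (y[t] - f1) ** 2
    d_bar = d.mean()
    se = d.std(ddof=1) / np.sqrt(P)
    return d_bar / se

# Illustrative use: the larger model adds 40 pure-noise predictors, so it is
# overfit and should not beat the intercept-only benchmark out-of-sample.
rng = np.random.default_rng(0)
T, k_extra = 400, 40
X0 = np.ones((T, 1))                                 # benchmark: constant only
X1 = np.hstack([X0, rng.normal(size=(T, k_extra))])  # overfit competitor
y = 1.0 + rng.normal(size=T)
print(dmw_oos_test(y, X0, X1, R=200))

In the abstract's notation, P is the number of out-of-sample forecasts and T the total sample size; the even split used here (P = 200, T = 400) is exactly the kind of large-P choice the paper warns about, since it does not satisfy the P²/T → 0 condition.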


File URL: http://www2.econ.iastate.edu/papers/p12462-2014-03-28.pdf
Download Restriction: no

Paper provided by Iowa State University, Department of Economics in its series Staff General Research Papers with number 32462.

Date of creation: 28 Mar 2014
Handle: RePEc:isu:genres:32462
Contact details of provider:
Postal: Iowa State University, Dept. of Economics, 260 Heady Hall, Ames, IA 50011-1070
Phone: +1 515.294.6741
Fax: +1 515.294.0221
Web page: http://www.econ.iastate.edu

References listed on IDEAS

  1. Rossi, Barbara & Giacomini, Raffaella, 2006. "Detecting and Predicting Forecast Breakdowns," Working Papers 06-01, Duke University, Department of Economics.
  2. Raffaella Giacomini & Halbert White, 2006. "Tests of Conditional Predictive Ability," Econometrica, Econometric Society, vol. 74(6), pages 1545-1578, November.
  3. Inoue, Atsushi & Kilian, Lutz, 2002. "In-Sample or Out-of-Sample Tests of Predictability: Which One Should We Use?," CEPR Discussion Papers 3671, C.E.P.R. Discussion Papers.
  4. McCracken, Michael W., 2007. "Asymptotics for out of sample tests of Granger causality," Journal of Econometrics, Elsevier, vol. 140(2), pages 719-752, October.
  5. Inoue, Atsushi & Kilian, Lutz, 2003. "On the Selection of Forecasting Models," CEPR Discussion Papers 3809, C.E.P.R. Discussion Papers.
  6. Todd E. Clark & Michael W. McCracken, 2009. "In-sample tests of predictive ability: a new approach," Research Working Paper RWP 09-10, Federal Reserve Bank of Kansas City.
  7. Stanislav Anatolyev, 2007. "Inference about predictive ability when there are many predictors," Working Papers w0096, Center for Economic and Financial Research (CEFIR).
  8. Robert M. De Jong & James Davidson, 2000. "Consistency of Kernel Estimators of Heteroscedastic and Autocorrelated Covariance Matrices," Econometrica, Econometric Society, vol. 68(2), pages 407-424, March.
  9. Todd E. Clark & Kenneth D. West, 2005. "Approximately normal tests for equal predictive accuracy in nested models," Research Working Paper RWP 05-05, Federal Reserve Bank of Kansas City.
  10. Diebold, Francis X & Mariano, Roberto S, 2002. "Comparing Predictive Accuracy," Journal of Business & Economic Statistics, American Statistical Association, vol. 20(1), pages 134-144, January.
  11. Amit Goyal & Ivo Welch, 2004. "A Comprehensive Look at the Empirical Performance of Equity Premium Prediction," Yale School of Management Working Papers amz2412, Yale School of Management, revised 01 Jan 2006.
  12. Calhoun, Gray, 2010. "Hypothesis Testing in Linear Regression when K/N is Large," Staff General Research Papers 32216, Iowa State University, Department of Economics.
  13. Clark, Todd E. & McCracken, Michael W., 2015. "Nested forecast model comparisons: A new approach to testing equal accuracy," Journal of Econometrics, Elsevier, vol. 186(1), pages 160-177.
  14. Todd E. Clark & Kenneth D. West, 2004. "Using out-of-sample mean squared prediction errors to test the Martingale difference hypothesis," Research Working Paper RWP 04-03, Federal Reserve Bank of Kansas City.
  15. Todd E. Clark & Michael McCracken, 1999. "Tests of Equal Forecast Accuracy and Encompassing for Nested Models," Computing in Economics and Finance 1999 1241, Society for Computational Economics.
  16. Corradi, Valentina & Swanson, Norman R., 2004. "Some recent developments in predictive accuracy testing with nested models and (generic) nonlinear alternatives," International Journal of Forecasting, Elsevier, vol. 20(2), pages 185-199.
  17. repec:jss:jstsof:11:i10 (reference not matched with an item on IDEAS).
  18. de Jong, Robert M., 1997. "Central Limit Theorems for Dependent Heterogeneous Random Variables," Econometric Theory, Cambridge University Press, vol. 13(03), pages 353-367, June.
  19. Todd Clark & Michael McCracken, 2005. "Evaluating Direct Multistep Forecasts," Econometric Reviews, Taylor & Francis Journals, vol. 24(4), pages 369-404.


This information is provided to you by IDEAS at the Research Division of the Federal Reserve Bank of St. Louis using RePEc data.