Printed from https://ideas.repec.org/p/pen/papers/12-035.html

Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests

Author

Listed:
  • Francis X. Diebold

    (Department of Economics, University of Pennsylvania)

Abstract

The Diebold-Mariano (DM) test was intended for comparing forecasts; it has been, and remains, useful in that regard. The DM test was not intended for comparing models. Unfortunately, however, much of the large subsequent literature uses DM-type tests for comparing models, in (pseudo-) out-of-sample environments. In that case, much simpler yet more compelling full-sample model comparison procedures exist; they have been, and should continue to be, widely used. The hunch that (pseudo-) out-of-sample analysis is somehow the "only," or "best," or even a "good" way to provide insurance against in-sample overfitting in model comparisons proves largely false. On the other hand, (pseudo-) out-of-sample analysis may be useful for learning about comparative historical predictive performance.
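As a concrete point of reference for the forecast-comparison setting the abstract describes, below is a minimal sketch (not part of the paper) of the DM statistic for two competing forecasts of the same target: the mean loss differential divided by an estimate of its standard error, compared against a standard normal critical value. The function name dm_test, the squared-error default loss, and the Bartlett-weighted long-run variance with truncation lag h-1 are illustrative assumptions rather than anything specified in the original text.

    # Minimal sketch of a Diebold-Mariano-type test for equal forecast accuracy.
    # Assumptions (not from the paper): squared-error loss by default, and a
    # Bartlett-weighted (Newey-West) variance of the mean loss differential
    # with truncation lag h-1 for h-step-ahead forecasts.
    import numpy as np
    from scipy.stats import norm

    def dm_test(actual, forecast1, forecast2, h=1, loss=lambda e: e**2):
        """Return (DM statistic, two-sided p-value) for H0: equal expected loss."""
        e1 = np.asarray(actual) - np.asarray(forecast1)
        e2 = np.asarray(actual) - np.asarray(forecast2)
        d = loss(e1) - loss(e2)          # loss differential series
        T = d.size
        dbar = d.mean()
        # Long-run variance of the loss differential, lags 0 .. h-1
        gamma0 = np.sum((d - dbar) ** 2) / T
        lrv = gamma0
        for k in range(1, h):
            gk = np.sum((d[k:] - dbar) * (d[:-k] - dbar)) / T
            lrv += 2.0 * (1.0 - k / h) * gk
        dm = dbar / np.sqrt(lrv / T)
        pval = 2.0 * (1.0 - norm.cdf(abs(dm)))
        return dm, pval

    # Illustrative usage: dm, p = dm_test(y, yhat_model_a, yhat_model_b, h=4)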

Suggested Citation

  • Francis X. Diebold, 2012. "Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests," PIER Working Paper Archive 12-035, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania.
  • Handle: RePEc:pen:papers:12-035

    Download full text from publisher

    File URL: https://economics.sas.upenn.edu/sites/default/files/filevault/12-035.pdf
    Download Restriction: no


    References listed on IDEAS

    1. West, Kenneth D, 1996. "Asymptotic Inference about Predictive Ability," Econometrica, Econometric Society, vol. 64(5), pages 1067-1084, September.
    2. Kilian, Lutz & Taylor, Mark P., 2003. "Why is it so difficult to beat the random walk forecast of exchange rates?," Journal of International Economics, Elsevier, vol. 60(1), pages 85-107, May.
    3. Francis X. Diebold & Roberto S. Mariano, 1991. "Comparing predictive accuracy I: an asymptotic test," Discussion Paper / Institute for Empirical Macroeconomics 52, Federal Reserve Bank of Minneapolis.
    4. Clark, Todd E. & McCracken, Michael W., 2009. "Tests of Equal Predictive Ability With Real-Time Data," Journal of Business & Economic Statistics, American Statistical Association, vol. 27(4), pages 441-454.
    5. Croushore, Dean, 2006. "Forecasting with Real-Time Macroeconomic Data," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 1, chapter 17, pages 961-982, Elsevier.
    6. Sarno, Lucio & Taylor, Mark P., 2003. "The Economics of Exchange Rates," Cambridge Books, Cambridge University Press, number 9780521485845.
    7. Atsushi Inoue & Lutz Kilian, 2005. "In-Sample or Out-of-Sample Tests of Predictability: Which One Should We Use?," Econometric Reviews, Taylor & Francis Journals, vol. 23(4), pages 371-402.
    8. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    9. Hamilton, James D, 1989. "A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle," Econometrica, Econometric Society, vol. 57(2), pages 357-384, March.
    10. Barbara Rossi & Atsushi Inoue, 2012. "Out-of-Sample Forecast Tests Robust to the Choice of Window Size," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 30(3), pages 432-453, April.
    11. Inoue, Atsushi & Kilian, Lutz, 2006. "On the selection of forecasting models," Journal of Econometrics, Elsevier, vol. 130(2), pages 273-306, February.
    12. David E. Rapach & Jack K. Strauss & Guofu Zhou, 2010. "Out-of-Sample Equity Premium Prediction: Combination Forecasts and Links to the Real Economy," The Review of Financial Studies, Society for Financial Studies, vol. 23(2), pages 821-862, February.
    13. Halbert White, 2000. "A Reality Check for Data Snooping," Econometrica, Econometric Society, vol. 68(5), pages 1097-1126, September.
    14. Li, Tong, 2009. "Simulation based selection of competing structural econometric models," Journal of Econometrics, Elsevier, vol. 148(2), pages 114-123, February.
    15. Clark, Todd E. & McCracken, Michael W., 2001. "Tests of equal forecast accuracy and encompassing for nested models," Journal of Econometrics, Elsevier, vol. 105(1), pages 85-110, November.
    16. Diebold, Francis X & Mariano, Roberto S, 2002. "Comparing Predictive Accuracy," Journal of Business & Economic Statistics, American Statistical Association, vol. 20(1), pages 134-144, January.
    17. Vuong, Quang H, 1989. "Likelihood Ratio Tests for Model Selection and Non-nested Hypotheses," Econometrica, Econometric Society, vol. 57(2), pages 307-333, March.
    18. John Geweke, 2010. "Complete and Incomplete Econometric Models," Economics Books, Princeton University Press, edition 1, number 9218.
    19. Raffaella Giacomini & Barbara Rossi, 2010. "Forecast comparisons in unstable environments," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 25(4), pages 595-620.
    20. Douglas Rivers & Quang Vuong, 2002. "Model selection tests for nonlinear dynamic models," Econometrics Journal, Royal Economic Society, vol. 5(1), pages 1-39, June.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Petropoulos, Fotios & Apiletti, Daniele & Assimakopoulos, Vassilios & Babai, Mohamed Zied & Barrow, Devon K. & Ben Taieb, Souhaib & Bergmeir, Christoph & Bessa, Ricardo J. & Bijak, Jakub & Boylan, Joh, 2022. "Forecasting: theory and practice," International Journal of Forecasting, Elsevier, vol. 38(3), pages 705-871.
      • Fotios Petropoulos & Daniele Apiletti & Vassilios Assimakopoulos & Mohamed Zied Babai & Devon K. Barrow & Souhaib Ben Taieb & Christoph Bergmeir & Ricardo J. Bessa & Jakub Bijak & John E. Boylan & Jet, 2020. "Forecasting: theory and practice," Papers 2012.03854, arXiv.org, revised Jan 2022.
    2. Galvão, Ana Beatriz, 2013. "Changes in predictive ability with mixed frequency data," International Journal of Forecasting, Elsevier, vol. 29(3), pages 395-410.
    3. Rossi, Barbara, 2013. "Advances in Forecasting under Instability," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1203-1324, Elsevier.
    4. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    5. Raffaella Giacomini & Barbara Rossi, 2013. "Forecasting in macroeconomics," Chapters, in: Nigar Hashimzade & Michael A. Thornton (ed.), Handbook of Research Methods and Applications in Empirical Macroeconomics, chapter 17, pages 381-408, Edward Elgar Publishing.
    6. Rapach, David & Zhou, Guofu, 2013. "Forecasting Stock Returns," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 328-383, Elsevier.
    7. Dichtl, Hubert & Drobetz, Wolfgang & Neuhierl, Andreas & Wendt, Viktoria-Sophie, 2021. "Data snooping in equity premium prediction," International Journal of Forecasting, Elsevier, vol. 37(1), pages 72-94.
    8. Brooks, Chris & Burke, Simon P. & Stanescu, Silvia, 2016. "Finite sample weighting of recursive forecast errors," International Journal of Forecasting, Elsevier, vol. 32(2), pages 458-474.
    9. Todd E. Clark & Michael W. McCracken, 2002. "Forecast-based model selection in the presence of structural breaks," Research Working Paper RWP 02-05, Federal Reserve Bank of Kansas City.
    10. Todd E. Clark & Michael W. McCracken, 2010. "Reality checks and nested forecast model comparisons," Working Papers 2010-032, Federal Reserve Bank of St. Louis.
    11. Todd E. Clark & Michael W. McCracken, 2010. "Testing for unconditional predictive ability," Working Papers 2010-031, Federal Reserve Bank of St. Louis.
    12. Peter Reinhard Hansen & Allan Timmermann, 2012. "Choice of Sample Split in Out-of-Sample Forecast Evaluation," CREATES Research Papers 2012-43, Department of Economics and Business Economics, Aarhus University.
    13. Barbara Rossi, 2013. "Exchange Rate Predictability," Journal of Economic Literature, American Economic Association, vol. 51(4), pages 1063-1119, December.
    14. Clarida, Richard H. & Sarno, Lucio & Taylor, Mark P. & Valente, Giorgio, 2003. "The out-of-sample success of term structure models as exchange rate predictors: a step beyond," Journal of International Economics, Elsevier, vol. 60(1), pages 61-83, May.
    15. Atsushi Inoue & Lutz Kilian, 2005. "In-Sample or Out-of-Sample Tests of Predictability: Which One Should We Use?," Econometric Reviews, Taylor & Francis Journals, vol. 23(4), pages 371-402.
    16. West, Kenneth D., 2006. "Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 1, chapter 3, pages 99-134, Elsevier.
    17. Wu, Jyh-Lin & Hu, Yu-Hau, 2009. "New evidence on nominal exchange rate predictability," Journal of International Money and Finance, Elsevier, vol. 28(6), pages 1045-1063, October.
    18. Barbara Rossi, 2019. "Forecasting in the presence of instabilities: How do we know whether models predict well and how to improve them," Economics Working Papers 1711, Department of Economics and Business, Universitat Pompeu Fabra, revised Jul 2021.
    19. Kirstin Hubrich & Kenneth D. West, 2010. "Forecast evaluation of small nested model sets," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 25(4), pages 574-594.
    20. Odendahl, Florens & Rossi, Barbara & Sekhposyan, Tatevik, 2023. "Evaluating forecast performance with state dependence," Journal of Econometrics, Elsevier, vol. 237(2).

    More about this item

    Keywords

    Forecasting; model comparison; model selection; out-of-sample tests

    JEL classification:

    • C01 - Mathematical and Quantitative Methods - - General - - - Econometrics
    • C53 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Forecasting and Prediction Models; Simulation Methods


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:pen:papers:12-035. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Administrator (email available below). General contact details of provider: https://edirc.repec.org/data/deupaus.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.