
Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests

Author

  • Francis X. Diebold

Abstract

The Diebold-Mariano (DM) test was intended for comparing forecasts; it has been, and remains, useful in that regard. The DM test was not intended for comparing models. Unfortunately, however, much of the large subsequent literature uses DM-type tests for comparing models, in (pseudo-) out-of-sample environments. In that case, much simpler yet more compelling full-sample model comparison procedures exist; they have been, and should continue to be, widely used. The hunch that (pseudo-) out-of-sample analysis is somehow the "only," or "best," or even a "good" way to provide insurance against in-sample over-fitting in model comparisons proves largely false. On the other hand, (pseudo-) out-of-sample analysis may be useful for learning about comparative historical predictive performance.
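
The abstract's distinction is easy to make concrete. The DM test takes two competing forecast series for the same target, forms the loss differential d_t between them, and tests whether its population mean is zero using a studentized sample mean with a serial-correlation-robust (long-run) variance. The sketch below is a minimal illustration, not the paper's own code: it assumes squared-error loss, an h-step horizon with autocovariances of d_t through lag h-1 entering the long-run variance (rectangular weights, as in the original DM proposal), and the asymptotic N(0,1) reference distribution; the function name dm_test and its arguments are illustrative.

```python
import numpy as np
from scipy import stats

def dm_test(actual, f1, f2, h=1):
    """Diebold-Mariano test of equal forecast accuracy under squared-error loss.

    actual : realized values; f1, f2 : the two competing h-step-ahead forecasts.
    Returns the DM statistic and its two-sided p-value from the N(0,1) limit.
    """
    actual, f1, f2 = map(np.asarray, (actual, f1, f2))
    d = (actual - f1) ** 2 - (actual - f2) ** 2      # loss differential d_t
    T = d.size
    d_bar = d.mean()

    # Long-run variance of d_t: gamma_0 + 2 * (gamma_1 + ... + gamma_{h-1}),
    # with sample autocovariances gamma_k and rectangular truncation at lag h-1.
    gamma = [np.sum((d[k:] - d_bar) * (d[:T - k] - d_bar)) / T for k in range(h)]
    lrv = gamma[0] + 2.0 * sum(gamma[1:])

    dm_stat = d_bar / np.sqrt(lrv / T)
    p_value = 2.0 * (1.0 - stats.norm.cdf(abs(dm_stat)))
    return dm_stat, p_value
```

Note that the inputs are forecasts, however produced; nothing in the statistic refers to the models that generated them. That is the abstract's point: applying such a test to pseudo-out-of-sample forecasts produced by estimated models turns it into a model-comparison exercise, for which the paper argues that full-sample procedures are simpler and more compelling.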

Suggested Citation

  • Francis X. Diebold, 2012. "Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests," NBER Working Papers 18391, National Bureau of Economic Research, Inc.
  • Handle: RePEc:nbr:nberwo:18391 Note: AP EFG IFM TWP

    Download full text from publisher

    File URL: http://www.nber.org/papers/w18391.pdf
    Download Restriction: no


    References listed on IDEAS

    1. Li, Tong, 2009. "Simulation based selection of competing structural econometric models," Journal of Econometrics, Elsevier, vol. 148(2), pages 114-123, February.
    2. Clark, Todd E. & McCracken, Michael W., 2001. "Tests of equal forecast accuracy and encompassing for nested models," Journal of Econometrics, Elsevier, pages 85-110.
    3. Atsushi Inoue & Lutz Kilian, 2005. "In-Sample or Out-of-Sample Tests of Predictability: Which One Should We Use?," Econometric Reviews, Taylor & Francis Journals, vol. 23(4), pages 371-402.
    4. Kilian, Lutz & Taylor, Mark P., 2003. "Why is it so difficult to beat the random walk forecast of exchange rates?," Journal of International Economics, Elsevier, pages 85-107.
    5. Diebold, Francis X. & Mariano, Roberto S., 2002. "Comparing Predictive Accuracy," Journal of Business & Economic Statistics, American Statistical Association, vol. 20(1), pages 134-144, January.
    6. Vuong, Quang H., 1989. "Likelihood Ratio Tests for Model Selection and Non-nested Hypotheses," Econometrica, Econometric Society, vol. 57(2), pages 307-333, March.
    7. Clark, Todd E. & McCracken, Michael W., 2009. "Tests of Equal Predictive Ability With Real-Time Data," Journal of Business & Economic Statistics, American Statistical Association, pages 441-454.
    8. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, Elsevier.
    9. Barbara Rossi & Atsushi Inoue, 2012. "Out-of-Sample Forecast Tests Robust to the Choice of Window Size," Journal of Business & Economic Statistics, Taylor & Francis Journals, pages 432-453.
    10. Inoue, Atsushi & Kilian, Lutz, 2006. "On the selection of forecasting models," Journal of Econometrics, Elsevier, vol. 130(2), pages 273-306, February.
    11. David E. Rapach & Jack K. Strauss & Guofu Zhou, 2010. "Out-of-Sample Equity Premium Prediction: Combination Forecasts and Links to the Real Economy," Review of Financial Studies, Society for Financial Studies, vol. 23(2), pages 821-862, February.
    12. Douglas Rivers & Quang Vuong, 2002. "Model selection tests for nonlinear dynamic models," Econometrics Journal, Royal Economic Society, vol. 5(1), pages 1-39, June.

    More about this item

    JEL classification:

    • C01 - Mathematical and Quantitative Methods - - General - - - Econometrics
    • C52 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Model Evaluation, Validation, and Selection
    • C53 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Forecasting and Prediction Models; Simulation Methods

