
On using predictive-ability tests in the selection of time-series prediction models: A Monte Carlo evaluation

Author

Listed:
  • Costantini, Mauro
  • Kunst, Robert M.

Abstract

To select a forecast model among competing models, researchers often run ex-ante prediction experiments over training samples. Following Diebold and Mariano (1995), forecasters routinely evaluate the relative performance of competing models with accuracy tests and may base their selection on test significance in addition to comparing forecast errors. Using extensive Monte Carlo analysis, we investigated whether this practice favors simpler models over more complex ones without yielding gains in forecast accuracy. We simulated autoregressive moving-average models, self-exciting threshold autoregressive models, and vector autoregressions. We considered two variants of the Diebold–Mariano test, the test by Giacomini and White (2006), the F-test by Clark and McCracken (2001), the Akaike information criterion, and a pure training-sample evaluation. The findings showed some accuracy gains from applying accuracy tests in small samples, particularly for the Clark–McCracken and bootstrapped Diebold–Mariano tests. On balance, however, the evidence ran against the testing procedure: training-sample evaluations without accuracy tests performed best in many cases.
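The selection rule the paper evaluates can be made concrete with a short sketch. The Python code below is not the authors' implementation; it is a minimal illustration, assuming squared-error loss and a Bartlett-kernel long-run variance estimate, of choosing between a simple and a complex model via a one-sided Diebold–Mariano test over a training sample. The function names dm_statistic and select_model are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def dm_statistic(e1, e2, h=1):
    """Diebold-Mariano statistic for equal predictive accuracy.

    e1, e2 : forecast-error arrays from the two competing models
    h      : forecast horizon; h-1 autocovariances enter the variance estimate
    """
    # Loss differential under squared-error loss (an assumption of this sketch)
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2
    T = d.size
    dbar = d.mean()
    # Bartlett-kernel (Newey-West) estimate of the long-run variance of d
    lrv = d.var(ddof=0)
    for k in range(1, h):
        w = 1.0 - k / h
        gamma = np.mean((d[k:] - dbar) * (d[:-k] - dbar))
        lrv += 2.0 * w * gamma
    return dbar / np.sqrt(lrv / T)

def select_model(e_simple, e_complex, h=1, alpha=0.05):
    """Keep the simple model unless the test significantly favors the complex one."""
    dm = dm_statistic(e_simple, e_complex, h)
    p_value = 1.0 - norm.cdf(dm)  # one-sided: H1 = complex model is more accurate
    return "complex" if p_value < alpha else "simple"
```

With e_simple and e_complex collected as rolling forecast errors over the training sample, select_model implements the significance-based rule the paper scrutinizes; dropping the test and simply picking the model with the smaller mean squared error gives the pure training-sample evaluation that the simulations often favor.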

Suggested Citation

  • Costantini, Mauro & Kunst, Robert M., 2021. "On using predictive-ability tests in the selection of time-series prediction models: A Monte Carlo evaluation," International Journal of Forecasting, Elsevier, vol. 37(2), pages 445-460.
  • Handle: RePEc:eee:intfor:v:37:y:2021:i:2:p:445-460
    DOI: 10.1016/j.ijforecast.2020.06.010

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0169207020301011
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ijforecast.2020.06.010?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version you can access through your library subscription.

    As access to this document is restricted, you may want to look for a different version below or search for one elsewhere.


    References listed on IDEAS

    1. Raffaella Giacomini & Halbert White, 2006. "Tests of Conditional Predictive Ability," Econometrica, Econometric Society, vol. 74(6), pages 1545-1578, November.
    2. Greg Tkacz & Carolyn Wilkins, 2008. "Linear and threshold forecasts of output and inflation using stock and housing prices," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 27(2), pages 131-151.
    3. Clark, Todd E. & McCracken, Michael W., 2001. "Tests of equal forecast accuracy and encompassing for nested models," Journal of Econometrics, Elsevier, vol. 105(1), pages 85-110, November.
    4. Dendramis, Yiannis & Kapetanios, George & Tzavalis, Elias, 2014. "Level shifts in stock returns driven by large shocks," Journal of Empirical Finance, Elsevier, vol. 29(C), pages 41-51.
    5. Bouwman, Kees E. & Jacobs, Jan P.A.M., 2011. "Forecasting with real-time macroeconomic data: The ragged-edge problem and revisions," Journal of Macroeconomics, Elsevier, vol. 33(4), pages 784-792.
    6. West, Kenneth D, 1996. "Asymptotic Inference about Predictive Ability," Econometrica, Econometric Society, vol. 64(5), pages 1067-1084, September.
    7. Todd Clark & Michael McCracken, 2005. "Evaluating Direct Multistep Forecasts," Econometric Reviews, Taylor & Francis Journals, vol. 24(4), pages 369-404.
    8. Diebold, Francis X & Mariano, Roberto S, 2002. "Comparing Predictive Accuracy," Journal of Business & Economic Statistics, American Statistical Association, vol. 20(1), pages 134-144, January.
    9. Vuong, Quang H, 1989. "Likelihood Ratio Tests for Model Selection and Non-nested Hypotheses," Econometrica, Econometric Society, vol. 57(2), pages 307-333, March.
    10. Harvey, David I. & Leybourne, Stephen J. & Whitehouse, Emily J., 2017. "Forecast evaluation tests and negative long-run variance estimates in small samples," International Journal of Forecasting, Elsevier, vol. 33(4), pages 833-847.
    11. Francis X. Diebold, 2015. "Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 33(1), pages 1-1, January.
    12. Roberto Savona & Marika Vezzoli, 2015. "Fitting and Forecasting Sovereign Defaults using Multiple Risk Signals," Oxford Bulletin of Economics and Statistics, Department of Economics, University of Oxford, vol. 77(1), pages 66-92, February.
    13. Andrea Carriero & George Kapetanios & Massimiliano Marcellino, 2011. "Forecasting large datasets with Bayesian reduced rank multivariate models," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 26(5), pages 735-761, August.
    14. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    15. Busetti, Fabio & Marcucci, Juri, 2013. "Comparing forecast accuracy: A Monte Carlo investigation," International Journal of Forecasting, Elsevier, vol. 29(1), pages 13-27.
    16. Mauro Costantini & Robert M. Kunst, 2011. "Combining forecasts based on multiple encompassing tests in a macroeconomic core system," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 30(6), pages 579-596, September.
    17. Hossein Hassani & Emmanuel Sirimal Silva, 2015. "A Kolmogorov-Smirnov Based Test for Comparing the Predictive Accuracy of Two Sets of Forecasts," Econometrics, MDPI, vol. 3(3), pages 1-20, August.
    18. Marcucci, Juri, 2005. "Forecasting Stock Market Volatility with Regime-Switching GARCH Models," Studies in Nonlinear Dynamics & Econometrics, De Gruyter, vol. 9(4), pages 1-55, December.
    19. Raffaella Giacomini & Barbara Rossi, 2006. "How Stable is the Forecasting Performance of the Yield Curve for Output Growth?," Oxford Bulletin of Economics and Statistics, Department of Economics, University of Oxford, vol. 68(s1), pages 783-795, December.
    20. Inoue, Atsushi & Kilian, Lutz, 2006. "On the selection of forecasting models," Journal of Econometrics, Elsevier, vol. 130(2), pages 273-306, February.
    21. Ing, Ching-Kang & Sin, Chor-yiu & Yu, Shu-Hui, 2012. "Model selection for integrated autoregressive processes of infinite order," Journal of Multivariate Analysis, Elsevier, vol. 106(C), pages 57-71.
    22. Lutz Kilian, 1998. "Small-Sample Confidence Intervals For Impulse Response Functions," The Review of Economics and Statistics, MIT Press, vol. 80(2), pages 218-230, May.
    23. Qu, Hui & Duan, Qingling & Niu, Mengyi, 2018. "Modeling the volatility of realized volatility to improve volatility forecasts in electricity markets," Energy Economics, Elsevier, vol. 74(C), pages 767-776.
    24. Yang, Yuhong, 2007. "Prediction/Estimation With Simple Linear Models: Is It Really That Simple?," Econometric Theory, Cambridge University Press, vol. 23(1), pages 1-36, February.
    25. Caldeira, João F. & Moura, Guilherme V. & Santos, André A.P., 2016. "Predicting the yield curve using forecast combinations," Computational Statistics & Data Analysis, Elsevier, vol. 100(C), pages 79-98.
    26. Hendry, David F, 1997. "The Econometrics of Macroeconomic Forecasting," Economic Journal, Royal Economic Society, vol. 107(444), pages 1330-1357, September.
    27. Costantini, Mauro & Pappalardo, Carmine, 2010. "A hierarchical procedure for the combination of forecasts," International Journal of Forecasting, Elsevier, vol. 26(4), pages 725-743, October.
    28. Reschenhofer, Erhard, 1999. "Improved Estimation Of The Expected Kullback–Leibler Discrepancy In Case Of Misspecification," Econometric Theory, Cambridge University Press, vol. 15(3), pages 377-387, June.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    2. Busetti, Fabio & Marcucci, Juri, 2013. "Comparing forecast accuracy: A Monte Carlo investigation," International Journal of Forecasting, Elsevier, vol. 29(1), pages 13-27.
    3. Jean-Yves Pitarakis, 2020. "A Novel Approach to Predictive Accuracy Testing in Nested Environments," Papers 2008.08387, arXiv.org, revised Oct 2023.
    4. Pincheira, Pablo & Hardy, Nicolas, 2022. "Correlation Based Tests of Predictability," MPRA Paper 112014, University Library of Munich, Germany.
    5. Petropoulos, Fotios & Apiletti, Daniele & Assimakopoulos, Vassilios & Babai, Mohamed Zied & Barrow, Devon K. & Ben Taieb, Souhaib & Bergmeir, Christoph & Bessa, Ricardo J. & Bijak, Jakub & Boylan, Joh, 2022. "Forecasting: theory and practice," International Journal of Forecasting, Elsevier, vol. 38(3), pages 705-871.
      • Fotios Petropoulos & Daniele Apiletti & Vassilios Assimakopoulos & Mohamed Zied Babai & Devon K. Barrow & Souhaib Ben Taieb & Christoph Bergmeir & Ricardo J. Bessa & Jakub Bijak & John E. Boylan & Jet, 2020. "Forecasting: theory and practice," Papers 2012.03854, arXiv.org, revised Jan 2022.
    6. Rossi, Barbara, 2013. "Advances in Forecasting under Instability," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1203-1324, Elsevier.
    7. Pincheira, Pablo M. & West, Kenneth D., 2016. "A comparison of some out-of-sample tests of predictability in iterated multi-step-ahead forecasts," Research in Economics, Elsevier, vol. 70(2), pages 304-319.
    8. Laura Coroneo & Fabrizio Iacone, 2020. "Comparing predictive accuracy in small samples using fixed‐smoothing asymptotics," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 35(4), pages 391-409, June.
    9. Ahmed, Shamim & Liu, Xiaoquan & Valente, Giorgio, 2016. "Can currency-based risk factors help forecast exchange rates?," International Journal of Forecasting, Elsevier, vol. 32(1), pages 75-97.
    10. Barbara Rossi & Atsushi Inoue, 2012. "Out-of-Sample Forecast Tests Robust to the Choice of Window Size," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 30(3), pages 432-453, April.
    11. Jack Fosten, 2016. "Forecast evaluation with factor-augmented models," University of East Anglia School of Economics Working Paper Series 2016-05, School of Economics, University of East Anglia, Norwich, UK.
    12. Håvard Hungnes, 2020. "Equal predictability test for multi-step-ahead system forecasts invariant to linear transformations," Discussion Papers 931, Statistics Norway, Research Department.
    13. Clark, Todd E. & McCracken, Michael W., 2015. "Nested forecast model comparisons: A new approach to testing equal accuracy," Journal of Econometrics, Elsevier, vol. 186(1), pages 160-177.
    14. Kirstin Hubrich & Kenneth D. West, 2010. "Forecast evaluation of small nested model sets," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 25(4), pages 574-594.
    15. Todd E. Clark & Michael W. McCracken, 2010. "Testing for unconditional predictive ability," Working Papers 2010-031, Federal Reserve Bank of St. Louis.
    16. Clark, Todd E. & McCracken, Michael W., 2009. "Tests of Equal Predictive Ability With Real-Time Data," Journal of Business & Economic Statistics, American Statistical Association, vol. 27(4), pages 441-454.
    17. Pablo Pincheira & Nicolás Hardy & Felipe Muñoz, 2021. "“Go Wild for a While!”: A New Test for Forecast Evaluation in Nested Models," Mathematics, MDPI, vol. 9(18), pages 1-28, September.
    18. Calhoun, Gray, 2014. "Out-Of-Sample Comparisons of Overfit Models," Staff General Research Papers Archive 32462, Iowa State University, Department of Economics.
    19. Filip Staněk, 2023. "Optimal out‐of‐sample forecast evaluation under stationarity," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 42(8), pages 2249-2279, December.
    20. Mayer, Walter J. & Liu, Feng & Dang, Xin, 2017. "Improving the power of the Diebold–Mariano–West test for least squares predictions," International Journal of Forecasting, Elsevier, vol. 33(3), pages 618-626.

    More about this item

    Keywords

    Forecasting; Time series; Predictive accuracy; Model selection; Monte Carlo simulation;

    JEL classification:

    • C22 - Mathematical and Quantitative Methods - - Single Equation Models; Single Variables - - - Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models; Diffusion Processes
    • C52 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Model Evaluation, Validation, and Selection
    • C53 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Forecasting and Prediction Models; Simulation Methods
