
On Using Predictive-ability Tests in the Selection of Time-series Prediction Models: A Monte Carlo Evaluation

Author

Listed:
  • Costantini, Mauro (Department of Economics and Finance, Brunel University, London)
  • Kunst, Robert M. (Institute for Advanced Studies, Vienna, and University of Vienna)

Abstract

Comparative ex-ante prediction experiments over expanding subsamples are a popular tool for selecting the best forecasting model class in finite samples of practical relevance. Flanking such a horse race with predictive-accuracy tests, such as the test by Diebold and Mariano (1995), tends to increase support for the simpler structure. We are concerned with the question of whether such simplicity boosting actually benefits predictive accuracy in finite samples. We consider two variants of the DM test, one with naive normal critical values and one with bootstrapped critical values; the predictive-ability test by Giacomini and White (2006), which remains valid in nested problems; the F test by Clark and McCracken (2001); and model selection via the AIC as a benchmark strategy. Our Monte Carlo simulations focus on basic univariate time-series specifications, such as linear (ARMA) and nonlinear (SETAR) generating processes.
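To make the procedure described in the abstract concrete, the following is a minimal sketch (not the authors' code) of an expanding-window forecast comparison flanked by a Diebold-Mariano test with naive normal critical values. The AR(1)-versus-sample-mean model pair, the simulation settings, and all function names are illustrative assumptions; the bootstrapped DM variant, the Giacomini-White and Clark-McCracken tests, and the AIC benchmark would replace only the final testing or selection step.

```python
# Illustrative sketch only: expanding-window "horse race" between two
# one-step-ahead forecasting models, evaluated with a Diebold-Mariano (DM)
# statistic compared against naive normal critical values.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(n, phi=0.6, sigma=1.0):
    """Generate an AR(1) series y_t = phi * y_{t-1} + e_t (illustrative DGP)."""
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal(scale=sigma)
    return y

def ar1_forecast(history):
    """One-step forecast from an AR(1) fitted by OLS on the expanding sample."""
    x, z = history[:-1], history[1:]
    phi_hat = (x @ z) / (x @ x)
    return phi_hat * history[-1]

def mean_forecast(history):
    """One-step forecast from the 'simpler structure': the sample mean."""
    return history.mean()

def dm_statistic(loss_a, loss_b):
    """DM statistic for h = 1 under squared-error loss.

    Uses the sample variance of the loss differential, i.e. the naive
    normal-critical-value variant; the bootstrapped variant would instead
    resample the differential series to obtain critical values.
    """
    d = loss_a - loss_b
    return d.mean() / np.sqrt(d.var(ddof=1) / len(d))

y = simulate_ar1(200)
start = 50  # first forecast origin; the estimation subsample expands from here
err_ar1, err_mean = [], []
for t in range(start, len(y) - 1):
    history = y[: t + 1]
    err_ar1.append(y[t + 1] - ar1_forecast(history))
    err_mean.append(y[t + 1] - mean_forecast(history))

dm = dm_statistic(np.array(err_ar1) ** 2, np.array(err_mean) ** 2)
print(f"DM statistic: {dm:.2f} (|DM| > 1.96 rejects equal accuracy at the 5% level)")
```

In this stylized setting, a DM statistic that fails to reject keeps the simpler mean model in contention, which is precisely the "simplicity boosting" effect whose consequences for finite-sample predictive accuracy the paper evaluates.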

Suggested Citation

  • Costantini, Mauro & Kunst, Robert M., 2018. "On Using Predictive-ability Tests in the Selection of Time-series Prediction Models: A Monte Carlo Evaluation," Economics Series 341, Institute for Advanced Studies.
  • Handle: RePEc:ihs:ihsesp:341

    Download full text from publisher

    File URL: https://irihs.ihs.ac.at/id/eprint/4712
    File Function: First version, 2018
    Download Restriction: no

    References listed on IDEAS

    1. Raffaella Giacomini & Halbert White, 2006. "Tests of Conditional Predictive Ability," Econometrica, Econometric Society, vol. 74(6), pages 1545-1578, November.
    2. Greg Tkacz & Carolyn Wilkins, 2008. "Linear and threshold forecasts of output and inflation using stock and housing prices," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 27(2), pages 131-151.
    3. Clark, Todd E. & McCracken, Michael W., 2001. "Tests of equal forecast accuracy and encompassing for nested models," Journal of Econometrics, Elsevier, vol. 105(1), pages 85-110, November.
    4. Dendramis, Yiannis & Kapetanios, George & Tzavalis, Elias, 2014. "Level shifts in stock returns driven by large shocks," Journal of Empirical Finance, Elsevier, vol. 29(C), pages 41-51.
    5. Bouwman, Kees E. & Jacobs, Jan P.A.M., 2011. "Forecasting with real-time macroeconomic data: The ragged-edge problem and revisions," Journal of Macroeconomics, Elsevier, vol. 33(4), pages 784-792.
    6. West, Kenneth D, 1996. "Asymptotic Inference about Predictive Ability," Econometrica, Econometric Society, vol. 64(5), pages 1067-1084, September.
    7. Todd Clark & Michael McCracken, 2005. "Evaluating Direct Multistep Forecasts," Econometric Reviews, Taylor & Francis Journals, vol. 24(4), pages 369-404.
    8. Diebold, Francis X & Mariano, Roberto S, 2002. "Comparing Predictive Accuracy," Journal of Business & Economic Statistics, American Statistical Association, vol. 20(1), pages 134-144, January.
    9. Vuong, Quang H, 1989. "Likelihood Ratio Tests for Model Selection and Non-nested Hypotheses," Econometrica, Econometric Society, vol. 57(2), pages 307-333, March.
    10. Harvey, David I. & Leybourne, Stephen J. & Whitehouse, Emily J., 2017. "Forecast evaluation tests and negative long-run variance estimates in small samples," International Journal of Forecasting, Elsevier, vol. 33(4), pages 833-847.
    11. Francis X. Diebold, 2015. "Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 33(1), pages 1-1, January.
    12. Roberto Savona & Marika Vezzoli, 2015. "Fitting and Forecasting Sovereign Defaults using Multiple Risk Signals," Oxford Bulletin of Economics and Statistics, Department of Economics, University of Oxford, vol. 77(1), pages 66-92, February.
    13. Andrea Carriero & George Kapetanios & Massimiliano Marcellino, 2011. "Forecasting large datasets with Bayesian reduced rank multivariate models," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 26(5), pages 735-761, August.
    14. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    15. Busetti, Fabio & Marcucci, Juri, 2013. "Comparing forecast accuracy: A Monte Carlo investigation," International Journal of Forecasting, Elsevier, vol. 29(1), pages 13-27.
    16. Mauro Costantini & Robert M. Kunst, 2011. "Combining forecasts based on multiple encompassing tests in a macroeconomic core system," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 30(6), pages 579-596, September.
    17. Hossein Hassani & Emmanuel Sirimal Silva, 2015. "A Kolmogorov-Smirnov Based Test for Comparing the Predictive Accuracy of Two Sets of Forecasts," Econometrics, MDPI, vol. 3(3), pages 1-20, August.
    18. Marcucci Juri, 2005. "Forecasting Stock Market Volatility with Regime-Switching GARCH Models," Studies in Nonlinear Dynamics & Econometrics, De Gruyter, vol. 9(4), pages 1-55, December.
    19. Raffaella Giacomini & Barbara Rossi, 2006. "How Stable is the Forecasting Performance of the Yield Curve for Output Growth?," Oxford Bulletin of Economics and Statistics, Department of Economics, University of Oxford, vol. 68(s1), pages 783-795, December.
    20. Inoue, Atsushi & Kilian, Lutz, 2006. "On the selection of forecasting models," Journal of Econometrics, Elsevier, vol. 130(2), pages 273-306, February.
    21. Ing, Ching-Kang & Sin, Chor-yiu & Yu, Shu-Hui, 2012. "Model selection for integrated autoregressive processes of infinite order," Journal of Multivariate Analysis, Elsevier, vol. 106(C), pages 57-71.
    22. Lutz Kilian, 1998. "Small-Sample Confidence Intervals For Impulse Response Functions," The Review of Economics and Statistics, MIT Press, vol. 80(2), pages 218-230, May.
    23. Qu, Hui & Duan, Qingling & Niu, Mengyi, 2018. "Modeling the volatility of realized volatility to improve volatility forecasts in electricity markets," Energy Economics, Elsevier, vol. 74(C), pages 767-776.
    24. Yang, Yuhong, 2007. "Prediction/Estimation With Simple Linear Models: Is It Really That Simple?," Econometric Theory, Cambridge University Press, vol. 23(1), pages 1-36, February.
    25. Caldeira, João F. & Moura, Guilherme V. & Santos, André A.P., 2016. "Predicting the yield curve using forecast combinations," Computational Statistics & Data Analysis, Elsevier, vol. 100(C), pages 79-98.
    26. Hendry, David F, 1997. "The Econometrics of Macroeconomic Forecasting," Economic Journal, Royal Economic Society, vol. 107(444), pages 1330-1357, September.
    27. Costantini, Mauro & Pappalardo, Carmine, 2010. "A hierarchical procedure for the combination of forecasts," International Journal of Forecasting, Elsevier, vol. 26(4), pages 725-743, October.
    28. Reschenhofer, Erhard, 1999. "Improved Estimation Of The Expected Kullback–Leibler Discrepancy In Case Of Misspecification," Econometric Theory, Cambridge University Press, vol. 15(3), pages 377-387, June.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    2. Busetti, Fabio & Marcucci, Juri, 2013. "Comparing forecast accuracy: A Monte Carlo investigation," International Journal of Forecasting, Elsevier, vol. 29(1), pages 13-27.
    3. Jean-Yves Pitarakis, 2020. "A Novel Approach to Predictive Accuracy Testing in Nested Environments," Papers 2008.08387, arXiv.org, revised Oct 2023.
    4. Rossi, Barbara, 2013. "Advances in Forecasting under Instability," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1203-1324, Elsevier.
    5. Pincheira, Pablo & Hardy, Nicolas, 2022. "Correlation Based Tests of Predictability," MPRA Paper 112014, University Library of Munich, Germany.
    6. Pincheira, Pablo M. & West, Kenneth D., 2016. "A comparison of some out-of-sample tests of predictability in iterated multi-step-ahead forecasts," Research in Economics, Elsevier, vol. 70(2), pages 304-319.
    7. Laura Coroneo & Fabrizio Iacone, 2020. "Comparing predictive accuracy in small samples using fixed‐smoothing asymptotics," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 35(4), pages 391-409, June.
    8. Petropoulos, Fotios & Apiletti, Daniele & Assimakopoulos, Vassilios & Babai, Mohamed Zied & Barrow, Devon K. & Ben Taieb, Souhaib & Bergmeir, Christoph & Bessa, Ricardo J. & Bijak, Jakub & Boylan, Joh, 2022. "Forecasting: theory and practice," International Journal of Forecasting, Elsevier, vol. 38(3), pages 705-871.
      • Fotios Petropoulos & Daniele Apiletti & Vassilios Assimakopoulos & Mohamed Zied Babai & Devon K. Barrow & Souhaib Ben Taieb & Christoph Bergmeir & Ricardo J. Bessa & Jakub Bijak & John E. Boylan & Jet, 2020. "Forecasting: theory and practice," Papers 2012.03854, arXiv.org, revised Jan 2022.
    9. Ahmed, Shamim & Liu, Xiaoquan & Valente, Giorgio, 2016. "Can currency-based risk factors help forecast exchange rates?," International Journal of Forecasting, Elsevier, vol. 32(1), pages 75-97.
    10. Mayer, Walter J. & Liu, Feng & Dang, Xin, 2017. "Improving the power of the Diebold–Mariano–West test for least squares predictions," International Journal of Forecasting, Elsevier, vol. 33(3), pages 618-626.
    11. Todd E. Clark & Michael W. McCracken, 2014. "Tests Of Equal Forecast Accuracy For Overlapping Models," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 29(3), pages 415-430, April.
    12. Clark, Todd E. & McCracken, Michael W., 2015. "Nested forecast model comparisons: A new approach to testing equal accuracy," Journal of Econometrics, Elsevier, vol. 186(1), pages 160-177.
    13. Kirstin Hubrich & Kenneth D. West, 2010. "Forecast evaluation of small nested model sets," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 25(4), pages 574-594.
    14. Barbara Rossi & Atsushi Inoue, 2012. "Out-of-Sample Forecast Tests Robust to the Choice of Window Size," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 30(3), pages 432-453, April.
    15. Brooks, Chris & Burke, Simon P. & Stanescu, Silvia, 2016. "Finite sample weighting of recursive forecast errors," International Journal of Forecasting, Elsevier, vol. 32(2), pages 458-474.
    16. Clark, Todd E. & McCracken, Michael W., 2009. "Tests of Equal Predictive Ability With Real-Time Data," Journal of Business & Economic Statistics, American Statistical Association, vol. 27(4), pages 441-454.
    17. Jack Fosten, 2016. "Forecast evaluation with factor-augmented models," University of East Anglia School of Economics Working Paper Series 2016-05, School of Economics, University of East Anglia, Norwich, UK.
    18. Francis X. Diebold, 2015. "Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 33(1), pages 1-1, January.
    19. Håvard Hungnes, 2020. "Equal predictability test for multi-step-ahead system forecasts invariant to linear transformations," Discussion Papers 931, Statistics Norway, Research Department.
    20. Todd E. Clark & Michael W. McCracken, 2010. "Testing for unconditional predictive ability," Working Papers 2010-031, Federal Reserve Bank of St. Louis.

    More about this item

    Keywords

    Forecasting; time series; predictive accuracy; model selection;

    JEL classification:

    • C22 - Mathematical and Quantitative Methods - - Single Equation Models; Single Variables - - - Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models; Diffusion Processes
    • C52 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Model Evaluation, Validation, and Selection
    • C53 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Forecasting and Prediction Models; Simulation Methods

    NEP fields

    This paper has been announced in the following NEP Reports:


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:ihs:ihsesp:341. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Doris Szoncsitz (email available below). General contact details of provider: https://edirc.repec.org/data/deihsat.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.