
Optimal Out-of-Sample Forecast Evaluation under Stationarity

Author

  • Filip Stanek

Abstract

It is common practice to split a time series into an in-sample segment and a pseudo out-of-sample segment and to estimate the out-of-sample loss of a given statistical model by evaluating its forecasting performance over the pseudo out-of-sample segment. We propose an alternative estimator of the out-of-sample loss which, contrary to conventional wisdom, utilizes both the measured in-sample and out-of-sample performance via a carefully constructed system of affine weights. We prove that, provided the time series is stationary, the proposed estimator is the best linear unbiased estimator of the out-of-sample loss and outperforms the conventional estimator in terms of sampling variance. Applying the optimal estimator to Diebold-Mariano-type tests of predictive ability leads to a substantial power gain without worsening finite-sample level distortions. An extensive evaluation on real-world time series from the M4 forecasting competition confirms the superiority of the proposed estimator and also demonstrates substantial robustness to violations of the underlying stationarity assumption.
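
To make the idea concrete, below is a minimal numerical sketch in Python of the principle the abstract describes, not the paper's exact construction. Under stationarity every per-period loss has the same expectation, so any affine combination of in-sample and out-of-sample losses whose weights sum to one is unbiased for the out-of-sample loss, and variance-aware weights can beat the conventional average over the pseudo out-of-sample segment alone. The AR(1) loss process, the plug-in autocorrelation estimate, and the GLS-style weights are illustrative assumptions standing in for the paper's derived optimal weights.

import numpy as np

rng = np.random.default_rng(42)
T, n_out, phi = 200, 40, 0.5  # total periods, pseudo-OOS length, loss persistence

def simulate_losses():
    # Stationary AR(1) per-period losses around a positive level; in practice
    # these would be, e.g., squared forecast errors from a fixed forecasting scheme.
    e = rng.normal(size=T)
    l = np.empty(T)
    l[0] = e[0]
    for t in range(1, T):
        l[t] = phi * l[t - 1] + e[t]
    return l + 5.0

loss = simulate_losses()

# Conventional estimator: average loss over the pseudo out-of-sample segment only.
conventional = loss[-n_out:].mean()

# Affine-weighted estimator: GLS weights w = S^{-1}1 / (1'S^{-1}1) built from a
# fitted AR(1) autocorrelation matrix S (an illustrative stand-in for the paper's
# optimal weights). The weights sum to one, so the estimator stays unbiased under
# stationarity while using all T loss observations, in-sample and out-of-sample.
phi_hat = np.corrcoef(loss[:-1], loss[1:])[0, 1]
S = phi_hat ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
w = np.linalg.solve(S, np.ones(T))
w /= w.sum()
weighted = w @ loss

print(f"conventional (OOS-only) estimate: {conventional:.3f}")
print(f"affine-weighted estimate:         {weighted:.3f}")

# Monte Carlo check of the variance claim: both estimators target the same mean,
# but the weighted one should show a visibly smaller sampling spread.
draws = np.array([((l := simulate_losses())[-n_out:].mean(), w @ l)
                  for _ in range(2000)])
print("sampling std (conventional, weighted):", draws.std(axis=0).round(3))

With weakly persistent losses the GLS weights are close to uniform over all T periods, which is exactly what drives the variance reduction relative to averaging only the last n_out losses.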

Suggested Citation

  • Filip Stanek, 2021. "Optimal Out-of-Sample Forecast Evaluation under Stationarity," CERGE-EI Working Papers wp712, The Center for Economic Research and Graduate Education - Economics Institute, Prague.
  • Handle: RePEc:cer:papers:wp712

    Download full text from publisher

    File URL: http://www.cerge-ei.cz/pdf/wp/Wp712.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Bergmeir, Christoph & Hyndman, Rob J. & Koo, Bonsoo, 2018. "A note on the validity of cross-validation for evaluating autoregressive time series prediction," Computational Statistics & Data Analysis, Elsevier, vol. 120(C), pages 70-83.
    2. West, Kenneth D, 1996. "Asymptotic Inference about Predictive Ability," Econometrica, Econometric Society, vol. 64(5), pages 1067-1084, September.
    3. Schnaubelt, Matthias, 2019. "A comparison of machine learning model validation schemes for non-stationary time series data," FAU Discussion Papers in Economics 11/2019, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
    4. Michael W. McCracken, 2020. "Diverging Tests of Equal Predictive Ability," Econometrica, Econometric Society, vol. 88(4), pages 1753-1754, July.
    5. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    6. Bergmeir, Christoph & Costantini, Mauro & Benítez, José M., 2014. "On the usefulness of cross-validation for directional forecast evaluation," Computational Statistics & Data Analysis, Elsevier, vol. 76(C), pages 132-143.
    7. Eben Lazarus & Daniel J. Lewis & James H. Stock & Mark W. Watson, 2018. "HAR Inference: Recommendations for Practice Rejoinder," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 36(4), pages 574-575, October.
    8. Raffaella Giacomini & Halbert White, 2006. "Tests of Conditional Predictive Ability," Econometrica, Econometric Society, vol. 74(6), pages 1545-1578, November.
    9. Clark, Todd E. & McCracken, Michael W., 2001. "Tests of equal forecast accuracy and encompassing for nested models," Journal of Econometrics, Elsevier, vol. 105(1), pages 85-110, November.
    10. Hyndman, Rob J. & Khandakar, Yeasmin, 2008. "Automatic Time Series Forecasting: The forecast Package for R," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 27(i03).
    11. Diebold, Francis X & Mariano, Roberto S, 2002. "Comparing Predictive Accuracy," Journal of Business & Economic Statistics, American Statistical Association, vol. 20(1), pages 134-144, January.
    12. Eben Lazarus & Daniel J. Lewis & James H. Stock & Mark W. Watson, 2018. "HAR Inference: Recommendations for Practice," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 36(4), pages 541-559, October.
    13. Makridakis, Spyros & Spiliotis, Evangelos & Assimakopoulos, Vassilios, 2020. "The M4 Competition: 100,000 time series and 61 forecasting methods," International Journal of Forecasting, Elsevier, vol. 36(1), pages 54-74.
    14. Racine, Jeff, 2000. "Consistent cross-validatory model-selection for dependent data: hv-block cross-validation," Journal of Econometrics, Elsevier, vol. 99(1), pages 39-61, November.
    15. G. Elliott & C. Granger & A. Timmermann (ed.), 2013. "Handbook of Economic Forecasting," Handbook of Economic Forecasting, Elsevier, edition 1, volume 2, number 2.
    16. Ibragimov, Rustam & Müller, Ulrich K., 2010. "t-Statistic Based Correlation and Heterogeneity Robust Inference," Journal of Business & Economic Statistics, American Statistical Association, vol. 28(4), pages 453-468.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Filip Staněk, 2023. "Optimal out‐of‐sample forecast evaluation under stationarity," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 42(8), pages 2249-2279, December.
    2. Petropoulos, Fotios & Apiletti, Daniele & Assimakopoulos, Vassilios & Babai, Mohamed Zied & Barrow, Devon K. & Ben Taieb, Souhaib & Bergmeir, Christoph & Bessa, Ricardo J. & Bijak, Jakub & Boylan, Joh, 2022. "Forecasting: theory and practice," International Journal of Forecasting, Elsevier, vol. 38(3), pages 705-871.
      • Fotios Petropoulos & Daniele Apiletti & Vassilios Assimakopoulos & Mohamed Zied Babai & Devon K. Barrow & Souhaib Ben Taieb & Christoph Bergmeir & Ricardo J. Bessa & Jakub Bijak & John E. Boylan & Jet, 2020. "Forecasting: theory and practice," Papers 2012.03854, arXiv.org, revised Jan 2022.
    3. Laura Coroneo & Fabrizio Iacone, 2020. "Comparing predictive accuracy in small samples using fixed‐smoothing asymptotics," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 35(4), pages 391-409, June.
    4. Ahmed, Shamim & Liu, Xiaoquan & Valente, Giorgio, 2016. "Can currency-based risk factors help forecast exchange rates?," International Journal of Forecasting, Elsevier, vol. 32(1), pages 75-97.
    5. Granziera, Eleonora & Sekhposyan, Tatevik, 2019. "Predicting relative forecasting performance: An empirical investigation," International Journal of Forecasting, Elsevier, vol. 35(4), pages 1636-1657.
    6. Pincheira, Pablo & Hardy, Nicolas, 2022. "Correlation Based Tests of Predictability," MPRA Paper 112014, University Library of Munich, Germany.
    7. Timmermann, Allan & Zhu, Yinchu, 2019. "Comparing Forecasting Performance with Panel Data," CEPR Discussion Papers 13746, C.E.P.R. Discussion Papers.
    8. Jamali, Ibrahim & Yamani, Ehab, 2019. "Out-of-sample exchange rate predictability in emerging markets: Fundamentals versus technical analysis," Journal of International Financial Markets, Institutions and Money, Elsevier, vol. 61(C), pages 241-263.
    9. Norman R. Swanson & Weiqi Xiong, 2018. "Big data analytics in economics: What have we learned so far, and where should we go from here?," Canadian Journal of Economics/Revue canadienne d'économique, John Wiley & Sons, vol. 51(3), pages 695-746, August.
    10. Mayer, Walter J. & Liu, Feng & Dang, Xin, 2017. "Improving the power of the Diebold–Mariano–West test for least squares predictions," International Journal of Forecasting, Elsevier, vol. 33(3), pages 618-626.
    11. Odendahl, Florens & Rossi, Barbara & Sekhposyan, Tatevik, 2023. "Evaluating forecast performance with state dependence," Journal of Econometrics, Elsevier, vol. 237(2).
    12. Hirukawa, Masayuki, 2023. "Robust Covariance Matrix Estimation in Time Series: A Review," Econometrics and Statistics, Elsevier, vol. 27(C), pages 36-61.
    13. Barbara Rossi & Atsushi Inoue, 2012. "Out-of-Sample Forecast Tests Robust to the Choice of Window Size," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 30(3), pages 432-453, April.
    14. Dichtl, Hubert & Drobetz, Wolfgang & Neuhierl, Andreas & Wendt, Viktoria-Sophie, 2021. "Data snooping in equity premium prediction," International Journal of Forecasting, Elsevier, vol. 37(1), pages 72-94.
    15. Jin, Sainan & Corradi, Valentina & Swanson, Norman R., 2017. "Robust Forecast Comparison," Econometric Theory, Cambridge University Press, vol. 33(6), pages 1306-1351, December.
    16. Costantini, Mauro & Kunst, Robert M., 2021. "On using predictive-ability tests in the selection of time-series prediction models: A Monte Carlo evaluation," International Journal of Forecasting, Elsevier, vol. 37(2), pages 445-460.
    17. Pincheira, Pablo M. & West, Kenneth D., 2016. "A comparison of some out-of-sample tests of predictability in iterated multi-step-ahead forecasts," Research in Economics, Elsevier, vol. 70(2), pages 304-319.
    18. Zhu, Yinchu & Timmermann, Allan, 2022. "Conditional rotation between forecasting models," Journal of Econometrics, Elsevier, vol. 231(2), pages 329-347.
    19. Busetti, Fabio & Marcucci, Juri, 2013. "Comparing forecast accuracy: A Monte Carlo investigation," International Journal of Forecasting, Elsevier, vol. 29(1), pages 13-27.
    20. Daniel Borup & Jonas N. Eriksen & Mads M. Kjær & Martin Thyrsgaard, 2020. "Predicting bond return predictability," CREATES Research Papers 2020-09, Department of Economics and Business Economics, Aarhus University.

    More about this item

    Keywords

loss estimation; forecast evaluation; cross-validation; model selection

    JEL classification:

    • C22 - Mathematical and Quantitative Methods - - Single Equation Models; Single Variables - - - Time-Series Models; Dynamic Quantile Regressions; Dynamic Treatment Effect Models; Diffusion Processes
    • C52 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Model Evaluation, Validation, and Selection
    • C53 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Forecasting and Prediction Models; Simulation Methods

