
Parameter Estimation Error in Tests of Predictive Performance under Discrete Loss Functions

Author

Listed:

  • Francisco Javier Eransus
  • Alfonso Novales Cinca

Abstract

We analyze the effect of parameter estimation error on the size of unconditional, population-level tests of predictive ability when they are implemented under a class of loss functions we refer to as ‘discrete loss functions’. The analysis is restricted to linear models in stationary variables. For non-nested models, we obtain analytical results guaranteeing the asymptotic irrelevance of parameter estimation error under a plausible predictive environment and for three subsets of discrete loss functions that seem well suited to many economic applications. For nested models, we provide Monte Carlo evidence suggesting that the asymptotic distribution of the Diebold and Mariano (1995) test is relatively robust to parameter estimation error in many cases when the test is implemented under discrete loss functions, unlike what happens under the squared forecast error or absolute forecast error loss functions.
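As a concrete illustration of the kind of comparison described above, the sketch below computes a Diebold and Mariano (1995)-type statistic for equal predictive accuracy under a simple 0/1 directional loss, one member of the broader class of discrete loss functions. It is a minimal sketch in Python, assuming a Bartlett (Newey-West) long-run variance estimate and simulated forecasts; the loss definition, function names and data are illustrative assumptions, not the specification used in the paper.

    # Minimal sketch (illustrative only): Diebold-Mariano test of equal
    # predictive accuracy under a 0/1 directional loss, an example of a
    # discrete loss function. Not the paper's own implementation.
    import numpy as np

    def directional_loss(y, f):
        # 1 when the forecast misses the sign of the realization, 0 otherwise
        return (np.sign(y) != np.sign(f)).astype(float)

    def dm_statistic(y, f1, f2, lags=None):
        # Loss differential between the two competing forecasts
        d = directional_loss(y, f1) - directional_loss(y, f2)
        P = d.size
        dbar = d.mean()
        if lags is None:
            lags = int(np.floor(P ** (1.0 / 3.0)))
        u = d - dbar
        # Bartlett (Newey-West) estimate of the long-run variance of d_t
        lrv = u @ u / P
        for k in range(1, lags + 1):
            w = 1.0 - k / (lags + 1)
            lrv += 2.0 * w * (u[k:] @ u[:-k]) / P
        # Approximately N(0,1) under the null of equal expected loss
        return dbar / np.sqrt(lrv / P)

    # Hypothetical example: two sets of one-step-ahead forecasts of y
    rng = np.random.default_rng(0)
    y = rng.standard_normal(200)
    f1 = 0.5 * y + rng.standard_normal(200)   # forecasts correlated with y
    f2 = rng.standard_normal(200)             # uninformative forecasts
    print(dm_statistic(y, f1, f2))

Under squared or absolute error loss only the loss function would change; the point of the abstract is that size distortions caused by parameter estimation error behave differently across these loss choices.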

Suggested Citation

  • Francisco Javier Eransus & Alfonso Novales Cinca, 2014. "Parameter Estimation Error in Tests of Predictive Performance under Discrete Loss Functions," Documentos de Trabajo del ICAE 2014-22, Universidad Complutense de Madrid, Facultad de Ciencias Económicas y Empresariales, Instituto Complutense de Análisis Económico.
  • Handle: RePEc:ucm:doicae:1422

    Download full text from publisher

    File URL: https://eprints.ucm.es/id/eprint/26397/1/1422.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Blaskowitz, Oliver & Herwartz, Helmut, 2011. "On economic evaluation of directional forecasts," International Journal of Forecasting, Elsevier, vol. 27(4), pages 1058-1065, October.
    2. Clark, Todd E. & McCracken, Michael W., 2001. "Tests of equal forecast accuracy and encompassing for nested models," Journal of Econometrics, Elsevier, vol. 105(1), pages 85-110, November.
    3. McCracken, Michael W., 2004. "Parameter estimation and tests of equal forecast accuracy between non-nested models," International Journal of Forecasting, Elsevier, vol. 20(3), pages 503-514.
    4. West, Kenneth D, 1996. "Asymptotic Inference about Predictive Ability," Econometrica, Econometric Society, vol. 64(5), pages 1067-1084, September.
    5. Diebold, Francis X & Mariano, Roberto S, 2002. "Comparing Predictive Accuracy," Journal of Business & Economic Statistics, American Statistical Association, vol. 20(1), pages 134-144, January.
    6. Clark, Todd E. & West, Kenneth D., 2007. "Approximately normal tests for equal predictive accuracy in nested models," Journal of Econometrics, Elsevier, vol. 138(1), pages 291-311, May.
    7. Pesaran, M. Hashem & Timmermann, Allan, 2009. "Testing Dependence Among Serially Correlated Multicategory Variables," Journal of the American Statistical Association, American Statistical Association, vol. 104(485), pages 325-337.
    8. McCracken, Michael W., 2000. "Robust out-of-sample inference," Journal of Econometrics, Elsevier, vol. 99(2), pages 195-223, December.
    9. McCracken, Michael W., 2007. "Asymptotics for out of sample tests of Granger causality," Journal of Econometrics, Elsevier, vol. 140(2), pages 719-752, October.
    10. Corradi, Valentina & Swanson, Norman R. & Olivetti, Claudia, 2001. "Predictive ability with cointegrated variables," Journal of Econometrics, Elsevier, vol. 104(2), pages 315-358, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Clark, Todd & McCracken, Michael, 2013. "Advances in Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1107-1201, Elsevier.
    2. Corradi, Valentina & Swanson, Norman R., 2004. "Some recent developments in predictive accuracy testing with nested models and (generic) nonlinear alternatives," International Journal of Forecasting, Elsevier, vol. 20(2), pages 185-199.
    3. West, Kenneth D., 2006. "Forecast Evaluation," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 1, chapter 3, pages 99-134, Elsevier.
    4. Aaron J. Amburgey & Michael W. McCracken, 2023. "On the real‐time predictive content of financial condition indices for growth," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 38(2), pages 137-163, March.
    5. Brooks, Chris & Burke, Simon P. & Stanescu, Silvia, 2016. "Finite sample weighting of recursive forecast errors," International Journal of Forecasting, Elsevier, vol. 32(2), pages 458-474.
    6. Ferrara, Laurent & Marcellino, Massimiliano & Mogliani, Matteo, 2015. "Macroeconomic forecasting during the Great Recession: The return of non-linearity?," International Journal of Forecasting, Elsevier, vol. 31(3), pages 664-679.
    7. Raffaella Giacomini & Barbara Rossi, 2013. "Forecasting in macroeconomics," Chapters, in: Nigar Hashimzade & Michael A. Thornton (ed.), Handbook of Research Methods and Applications in Empirical Macroeconomics, chapter 17, pages 381-408, Edward Elgar Publishing.
    8. Clark, Todd E. & McCracken, Michael W., 2001. "Tests of equal forecast accuracy and encompassing for nested models," Journal of Econometrics, Elsevier, vol. 105(1), pages 85-110, November.
    9. Clark, Todd E. & McCracken, Michael W., 2009. "Tests of Equal Predictive Ability With Real-Time Data," Journal of Business & Economic Statistics, American Statistical Association, vol. 27(4), pages 441-454.
    10. Corradi, Valentina & Swanson, Norman R., 2002. "A consistent test for nonlinear out of sample predictive accuracy," Journal of Econometrics, Elsevier, vol. 110(2), pages 353-381, October.
    11. Todd E. Clark & Michael W. McCracken, 2010. "Testing for unconditional predictive ability," Working Papers 2010-031, Federal Reserve Bank of St. Louis.
    12. Mariano, Roberto S. & Preve, Daniel, 2012. "Statistical tests for multiple forecast comparison," Journal of Econometrics, Elsevier, vol. 169(1), pages 123-130.
    13. Kurennoy, Alexey (Куренной, Алексей), 2015. "Evaluation of the Forecasting Quality [Оценка Качества Прогнозирования]," Published Papers mak7, Russian Presidential Academy of National Economy and Public Administration.
    14. Kim, Hyun Hak & Swanson, Norman R., 2018. "Mining big data using parsimonious factor, machine learning, variable selection and shrinkage methods," International Journal of Forecasting, Elsevier, vol. 34(2), pages 339-354.
    15. Rossi, Barbara & Sekhposyan, Tatevik, 2011. "Understanding models' forecasting performance," Journal of Econometrics, Elsevier, vol. 164(1), pages 158-172, September.
    16. Rossi, Barbara, 2013. "Advances in Forecasting under Instability," Handbook of Economic Forecasting, in: G. Elliott & C. Granger & A. Timmermann (ed.), Handbook of Economic Forecasting, edition 1, volume 2, chapter 0, pages 1203-1324, Elsevier.
    17. Kenneth S. Rogoff & Vania Stavrakeva, 2008. "The Continuing Puzzle of Short Horizon Exchange Rate Forecasting," NBER Working Papers 14071, National Bureau of Economic Research, Inc.
    18. Christopher J. Neely & David E. Rapach & Jun Tu & Guofu Zhou, 2014. "Forecasting the Equity Risk Premium: The Role of Technical Indicators," Management Science, INFORMS, vol. 60(7), pages 1772-1791, July.
    19. Mayer, Walter J. & Liu, Feng & Dang, Xin, 2017. "Improving the power of the Diebold–Mariano–West test for least squares predictions," International Journal of Forecasting, Elsevier, vol. 33(3), pages 618-626.
    20. Atsushi Inoue & Lutz Kilian, 2005. "In-Sample or Out-of-Sample Tests of Predictability: Which One Should We Use?," Econometric Reviews, Taylor & Francis Journals, vol. 23(4), pages 371-402.

    More about this item

    Keywords

    Parameter uncertainty; Forecast accuracy; Discrete loss function.

    JEL classification:

    • C12 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Hypothesis Testing: General
    • C52 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Model Evaluation, Validation, and Selection
    • C53 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Forecasting and Prediction Models; Simulation Methods
