
On Not Evaluating Economic Models by Forecast Outcomes

Author

Listed:
  • Jennifer Castle
  • David Hendry

Abstract

Even in scientific disciplines, forecast failures occur. Four possible states of nature (a model is good or bad, and it forecasts well or badly) are examined using a forecast-error taxonomy, which traces the many possible sources of forecast errors. This analysis shows that a valid model can forecast badly, and a poor model can forecast successfully. Delineating the main causes of forecast failure reveals transformations that can correct failure without altering the 'quality' of the model in use. We conclude that judging a model by the accuracy of its forecasts is more like fools' gold than a gold standard.
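A small simulation (not from the paper; the AR(1) set-up, parameter values, and location shift below are illustrative assumptions) can make the abstract's central point concrete: a correctly specified model estimated before an unanticipated shift in the mean can forecast badly, while a misspecified "naive" device that happens to be robust to such shifts can forecast well.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data-generating process (assumed, not taken from the paper):
# an AR(1) around a mean that shifts at the forecast origin,
#   y_t = mu_t + rho * (y_{t-1} - mu_{t-1}) + e_t,   e_t ~ N(0, sigma^2).
rho, sigma = 0.5, 1.0
T, H = 200, 20              # estimation sample length, forecast horizon
mu_in, mu_out = 0.0, 5.0    # in-sample mean, post-break mean

y = np.zeros(T + H)
for t in range(1, T + H):
    mu_now = mu_in if t < T else mu_out
    mu_lag = mu_in if t - 1 < T else mu_out
    y[t] = mu_now + rho * (y[t - 1] - mu_lag) + rng.normal(0.0, sigma)

# "Good" model: the correctly specified AR(1), estimated by OLS on the
# pre-break sample only, so its intercept reflects the old mean.
X = np.column_stack([np.ones(T - 1), y[:T - 1]])
b0, b1 = np.linalg.lstsq(X, y[1:T], rcond=None)[0]

# One-step-ahead forecasts over the post-break period.
ar1_fc = b0 + b1 * y[T - 1:T + H - 1]   # well specified, but not shift-robust
rw_fc = y[T - 1:T + H - 1]              # naive random walk: a "poor" model

actual = y[T:]

def rmse(fc):
    return float(np.sqrt(np.mean((actual - fc) ** 2)))

print(f"RMSE, well-specified AR(1): {rmse(ar1_fc):.2f}")
print(f"RMSE, naive random walk:    {rmse(rw_fc):.2f}")
```

Under these assumptions the well-specified AR(1) typically records a markedly larger post-break RMSE than the naive random walk, because its intercept still embodies the old mean. Differencing-type devices of this kind illustrate the transformations mentioned in the abstract: they can correct forecast failure while saying nothing about the quality of the original model.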

Suggested Citation

  • Jennifer Castle & David Hendry, 2011. "On Not Evaluating Economic Models by Forecast Outcomes," Economics Series Working Papers 538, University of Oxford, Department of Economics.
  • Handle: RePEc:oxf:wpaper:538

    Download full text from publisher

    File URL: https://ora.ox.ac.uk/objects/uuid:3c5a83c4-ff1f-4f6e-823e-e1fcffe4560b
    Download Restriction: no

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Jennifer Castle & Takamitsu Kurita, 2019. "Modelling and forecasting the dollar-pound exchange rate in the presence of structural breaks," Economics Series Working Papers 866, University of Oxford, Department of Economics.
    2. Jennifer Castle & David Hendry, 2016. "Policy Analysis, Forediction, and Forecast Failure," Economics Series Working Papers 809, University of Oxford, Department of Economics.
    3. Jennifer L. Castle & David F. Hendry & Andrew B. Martinez, 2017. "Evaluating Forecasts, Narratives and Policy Using a Test of Invariance," Econometrics, MDPI, vol. 5(3), pages 1-27, September.

    More about this item

    Keywords

    Model evaluation; Forecast failure; Model selection;

    JEL classification:

    • C18 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Methodological Issues: General
    • C52 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Model Evaluation, Validation, and Selection

