
Evaluating epidemic forecasts in an interval format

Author

Listed:
  • Johannes Bracher
  • Evan L Ray
  • Tilmann Gneiting
  • Nicholas G Reich

Abstract

For practical reasons, many forecasts of case, hospitalization, and death counts in the context of the current Coronavirus Disease 2019 (COVID-19) pandemic are issued in the form of central predictive intervals at various levels. This is also the case for the forecasts collected in the COVID-19 Forecast Hub (https://covid19forecasthub.org/). Forecast evaluation metrics like the logarithmic score, which has been applied in several infectious disease forecasting challenges, are then not available as they require full predictive distributions. This article provides an overview of how established methods for the evaluation of quantile and interval forecasts can be applied to epidemic forecasts in this format. Specifically, we discuss the computation and interpretation of the weighted interval score, which is a proper score that approximates the continuous ranked probability score. It can be interpreted as a generalization of the absolute error to probabilistic forecasts and allows for a decomposition into a measure of sharpness and penalties for over- and underprediction.

Author summary: During the COVID-19 pandemic, model-based probabilistic forecasts of case, hospitalization, and death numbers can help to improve situational awareness and guide public health interventions. The COVID-19 Forecast Hub (https://covid19forecasthub.org/) collects such forecasts from numerous national and international groups. Systematic and statistically sound evaluation of forecasts is an important prerequisite to revise and improve models and to combine different forecasts into ensemble predictions. We provide an intuitive introduction to scoring methods, which are suitable for the interval/quantile-based format used in the Forecast Hub, and compare them to other commonly used performance measures.
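The weighted interval score described in the abstract can be computed directly from a forecast's median and central prediction intervals. The Python sketch below illustrates that computation under the definitions given in the article (interval score with sharpness and over-/underprediction penalties, combined with weights proportional to the interval levels); the function names, quantile levels, and example numbers are illustrative and not taken from the article or the COVID-19 Forecast Hub code.

    import numpy as np

    def interval_score(y, lower, upper, alpha):
        """Interval score for a central (1 - alpha) x 100% prediction interval."""
        dispersion = upper - lower                                 # sharpness component
        underprediction = (2 / alpha) * np.maximum(y - upper, 0)   # penalty if y lies above the interval
        overprediction = (2 / alpha) * np.maximum(lower - y, 0)    # penalty if y lies below the interval
        return dispersion + underprediction + overprediction

    def weighted_interval_score(y, median, lowers, uppers, alphas):
        """WIS: absolute error of the median plus K weighted interval scores,
        with weights w_0 = 1/2 and w_k = alpha_k / 2, normalized by K + 1/2."""
        K = len(alphas)
        score = 0.5 * abs(y - median)
        for lower, upper, alpha in zip(lowers, uppers, alphas):
            score += (alpha / 2) * interval_score(y, lower, upper, alpha)
        return score / (K + 0.5)

    # Hypothetical example: median 120, 50% interval [100, 140], 90% interval [80, 180],
    # observed value 160.
    print(weighted_interval_score(y=160, median=120,
                                  lowers=[100, 80], uppers=[140, 180],
                                  alphas=[0.5, 0.1]))

The three terms of the interval score correspond to the decomposition mentioned in the abstract: the interval width measures sharpness, while the two penalty terms capture over- and underprediction.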

Suggested Citation

  • Johannes Bracher & Evan L Ray & Tilmann Gneiting & Nicholas G Reich, 2021. "Evaluating epidemic forecasts in an interval format," PLOS Computational Biology, Public Library of Science, vol. 17(2), pages 1-15, February.
  • Handle: RePEc:plo:pcbi00:1008618
    DOI: 10.1371/journal.pcbi.1008618

    Download full text from publisher

    File URL: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008618
    Download Restriction: no

    File URL: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1008618&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pcbi.1008618?utm_source=ideas
LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    References listed on IDEAS

    1. Gneiting, Tilmann & Raftery, Adrian E., 2007. "Strictly Proper Scoring Rules, Prediction, and Estimation," Journal of the American Statistical Association, American Statistical Association, vol. 102, pages 359-378, March.
    Full references (including those not matched with items on IDEAS)

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Coroneo, Laura & Iacone, Fabrizio & Paccagnini, Alessia & Santos Monteiro, Paulo, 2023. "Testing the predictive accuracy of COVID-19 forecasts," International Journal of Forecasting, Elsevier, vol. 39(2), pages 606-622.
    2. Zhichao Li, 2022. "Forecasting Weekly Dengue Cases by Integrating Google Earth Engine-Based Risk Predictor Generation and Google Colab-Based Deep Learning Modeling in Fortaleza and the Federal District, Brazil," IJERPH, MDPI, vol. 19(20), pages 1-16, October.
    3. Ray, Evan L. & Brooks, Logan C. & Bien, Jacob & Biggerstaff, Matthew & Bosse, Nikos I. & Bracher, Johannes & Cramer, Estee Y. & Funk, Sebastian & Gerding, Aaron & Johansson, Michael A. & Rumack, Aaron, 2023. "Comparing trained and untrained probabilistic ensemble forecasts of COVID-19 cases and deaths in the United States," International Journal of Forecasting, Elsevier, vol. 39(3), pages 1366-1383.
    4. Luis A. Barboza & Shu Wei Chou Chen & Marcela Alfaro Córdoba & Eric J. Alfaro & Hugo G. Hidalgo, 2023. "Spatio‐temporal downscaling emulator for regional climate models," Environmetrics, John Wiley & Sons, Ltd., vol. 34(7), November.
    5. Wang, Xiaoqian & Hyndman, Rob J. & Li, Feng & Kang, Yanfei, 2023. "Forecast combinations: An over 50-year review," International Journal of Forecasting, Elsevier, vol. 39(4), pages 1518-1547.
    6. Fabian Kruger & Hendrik Plett, 2022. "Prediction intervals for economic fixed-event forecasts," Papers 2210.13562, arXiv.org, revised Mar 2024.
    7. Kathryn S Taylor & James W Taylor, 2022. "Interval forecasts of weekly incident and cumulative COVID-19 mortality in the United States: A comparison of combining methods," PLOS ONE, Public Library of Science, vol. 17(3), pages 1-25, March.
    8. Chiang, Wen-Hao & Liu, Xueying & Mohler, George, 2022. "Hawkes process modeling of COVID-19 with mobility leading indicators and spatial covariates," International Journal of Forecasting, Elsevier, vol. 38(2), pages 505-520.
    9. Taylor, James W. & Taylor, Kathryn S., 2023. "Combining probabilistic forecasts of COVID-19 mortality in the United States," European Journal of Operational Research, Elsevier, vol. 304(1), pages 25-41.
    10. Jurgen A. Doornik & Jennifer L. Castle & David F. Hendry, 2021. "Modeling and forecasting the COVID‐19 pandemic time‐series data," Social Science Quarterly, Southwestern Social Science Association, vol. 102(5), pages 2070-2087, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Azar, Pablo D. & Micali, Silvio, 2018. "Computational principal agent problems," Theoretical Economics, Econometric Society, vol. 13(2), May.
    2. Angelica Gianfreda & Francesco Ravazzolo & Luca Rossini, 2023. "Large Time‐Varying Volatility Models for Hourly Electricity Prices," Oxford Bulletin of Economics and Statistics, Department of Economics, University of Oxford, vol. 85(3), pages 545-573, June.
    3. Davide Pettenuzzo & Francesco Ravazzolo, 2016. "Optimal Portfolio Choice Under Decision‐Based Model Combinations," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 31(7), pages 1312-1332, November.
    4. Rubio, F.J. & Steel, M.F.J., 2011. "Inference for grouped data with a truncated skew-Laplace distribution," Computational Statistics & Data Analysis, Elsevier, vol. 55(12), pages 3218-3231, December.
    5. Hwang, Eunju, 2022. "Prediction intervals of the COVID-19 cases by HAR models with growth rates and vaccination rates in top eight affected countries: Bootstrap improvement," Chaos, Solitons & Fractals, Elsevier, vol. 155(C).
    6. R de Fondeville & A C Davison, 2018. "High-dimensional peaks-over-threshold inference," Biometrika, Biometrika Trust, vol. 105(3), pages 575-592.
    7. Armantier, Olivier & Treich, Nicolas, 2013. "Eliciting beliefs: Proper scoring rules, incentives, stakes and hedging," European Economic Review, Elsevier, vol. 62(C), pages 17-40.
    8. Domenico Piccolo & Rosaria Simone, 2019. "The class of cub models: statistical foundations, inferential issues and empirical evidence," Statistical Methods & Applications, Springer;Società Italiana di Statistica, vol. 28(3), pages 389-435, September.
    9. Finn Lindgren, 2015. "Comments on: Comparing and selecting spatial predictors using local criteria," TEST: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 24(1), pages 35-44, March.
    10. Laura Liu & Hyungsik Roger Moon & Frank Schorfheide, 2023. "Forecasting with a panel Tobit model," Quantitative Economics, Econometric Society, vol. 14(1), pages 117-159, January.
    11. Warne, Anders, 2023. "DSGE model forecasting: rational expectations vs. adaptive learning," Working Paper Series 2768, European Central Bank.
    12. James Mitchell & Aubrey Poon & Dan Zhu, 2022. "Constructing Density Forecasts from Quantile Regressions: Multimodality in Macro-Financial Dynamics," Working Papers 22-12R, Federal Reserve Bank of Cleveland, revised 11 Apr 2023.
    13. Rafael Frongillo, 2022. "Quantum Information Elicitation," Papers 2203.07469, arXiv.org.
    14. Karimi, Majid & Zaerpour, Nima, 2022. "Put your money where your forecast is: Supply chain collaborative forecasting with cost-function-based prediction markets," European Journal of Operational Research, Elsevier, vol. 300(3), pages 1035-1049.
    15. Peysakhovich, Alexander & Plagborg-Møller, Mikkel, 2012. "A note on proper scoring rules and risk aversion," Economics Letters, Elsevier, vol. 117(1), pages 357-361.
    16. Ranadeep Daw & Christopher K. Wikle, 2023. "REDS: Random ensemble deep spatial prediction," Environmetrics, John Wiley & Sons, Ltd., vol. 34(1), February.
    17. Merkle, Edgar C. & Steyvers, Mark & Mellers, Barbara & Tetlock, Philip E., 2017. "A neglected dimension of good forecasting judgment: The questions we choose also matter," International Journal of Forecasting, Elsevier, vol. 33(4), pages 817-832.
    18. Remy Elbez & Jeff Folz & Alan McLean & Hernan Roca & Joseph M Labuz & Kenneth J Pienta & Shuichi Takayama & Raoul Kopelman, 2021. "Cell-morphodynamic phenotype classification with application to cancer metastasis using cell magnetorotation and machine-learning," PLOS ONE, Public Library of Science, vol. 16(11), pages 1-14, November.
    19. Angelica Gianfreda & Francesco Ravazzolo & Luca Rossini, 2020. "Large Time-Varying Volatility Models for Electricity Prices," Working Papers No 05/2020, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School.
    20. Yuanchao Emily Bo & David V. Budescu & Charles Lewis & Philip E. Tetlock & Barbara Mellers, 2017. "An IRT forecasting model: linking proper scoring rules to item response theory," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 12(2), pages 90-103, March.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pcbi00:1008618. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: ploscompbiol (email available below). General contact details of provider: https://journals.plos.org/ploscompbiol/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.