
Mathematically aggregating experts' predictions of possible futures

Author

Listed:
  • Hanea, Anca

    (University of Melbourne)

  • Wilkinson, David Peter
  • McBride, Marissa
  • Lyon, Aidan
  • van Ravenzwaaij, Don

    (University of Groningen)

  • Singleton Thorn, Felix

    (University of Melbourne)

  • Gray, Charles T.
  • Mandel, David R.
  • Willcox, Aaron

    (University of Melbourne)

  • Gould, Elliot

Abstract

Experts are often asked to represent their uncertainty as a subjective probability. Structured protocols offer a transparent and systematic way to elicit and combine probability judgements from multiple experts. As part of this process, experts are asked to individually estimate a probability (e.g., of a future event), and these individual estimates need to be combined/aggregated into a final group prediction. The experts' judgements can be aggregated behaviourally (by striving for consensus) or mathematically (by using a mathematical rule to combine individual estimates). Mathematical rules (e.g., weighted linear combinations of judgements) provide an objective approach to aggregation. However, the choice of rule is not straightforward, and the quality of the aggregated group probability judgement depends on it. The quality of an aggregation can be defined in terms of accuracy, calibration and informativeness. These measures can be used to compare different aggregation approaches and to help decide which aggregation produces the "best" final prediction. In the ideal case, individual experts' performance (as probability assessors) is scored, these scores are translated into performance-based weights, and a performance-based weighted aggregation is used. When this is not possible, though, several other aggregation methods, informed by measurable proxies for good performance, can be formulated and compared. We use several data sets to investigate the relative performance of multiple aggregation methods informed by previous experience and the available literature. Even though the accuracy, calibration, and informativeness of the majority of methods are very similar, two of the aggregation methods distinguish themselves as the best and worst.
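
As a rough illustration of the kind of mathematical aggregation rules discussed above, the Python sketch below combines individual probability judgements with an unweighted linear pool and with a performance-weighted linear pool, where the weights are derived from Brier scores on past calibration questions with known outcomes. The inverse-Brier weighting scheme, the function names, and the toy data are illustrative assumptions, not the specific aggregation methods compared in the paper.

    import numpy as np

    def brier_score(probs, outcomes):
        """Mean squared difference between probability judgements and binary outcomes (lower is better)."""
        probs = np.asarray(probs, dtype=float)
        outcomes = np.asarray(outcomes, dtype=float)
        return float(np.mean((probs - outcomes) ** 2))

    def linear_pool(probs, weights=None):
        """Weighted linear combination of individual probability judgements."""
        probs = np.asarray(probs, dtype=float)
        if weights is None:                        # unweighted: simple average
            weights = np.ones_like(probs)
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()          # normalise so the weights sum to 1
        return float(np.dot(weights, probs))

    # Toy example (hypothetical data): three experts answered three past
    # calibration questions whose outcomes (0/1) are now known.
    past_judgements = np.array([
        [0.9, 0.2, 0.7],   # expert A
        [0.6, 0.5, 0.5],   # expert B
        [0.8, 0.1, 0.9],   # expert C
    ])
    past_outcomes = np.array([1, 0, 1])

    # One possible performance-based weighting: inverse Brier score, so that
    # better-calibrated experts (lower Brier score) receive more weight.
    scores = np.array([brier_score(p, past_outcomes) for p in past_judgements])
    weights = 1.0 / scores

    # Each expert's probability that some future event of interest occurs.
    new_judgements = [0.85, 0.55, 0.75]

    print("Unweighted linear pool:   ", round(linear_pool(new_judgements), 3))
    print("Performance-weighted pool:", round(linear_pool(new_judgements, weights), 3))

A proper scoring rule such as the Brier score is only one possible proxy for performance; as the abstract notes, the choice of rule and of weights is not straightforward, and candidate aggregations can be compared on their accuracy, calibration and informativeness.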

Suggested Citation

  • Hanea, Anca & Wilkinson, David Peter & McBride, Marissa & Lyon, Aidan & van Ravenzwaaij, Don & Singleton Thorn, Felix & Gray, Charles T. & Mandel, David R. & Willcox, Aaron & Gould, Elliot, 2021. "Mathematically aggregating experts' predictions of possible futures," MetaArXiv rxmh7, Center for Open Science.
  • Handle: RePEc:osf:metaar:rxmh7
    DOI: 10.31219/osf.io/rxmh7

    Download full text from publisher

    File URL: https://osf.io/download/6034ba3434d30404cee44fc2/
    Download Restriction: no

    File URL: https://libkey.io/10.31219/osf.io/rxmh7?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version you can access through your library subscription.

    References listed on IDEAS

    1. Robert T. Clemen & Robert L. Winkler, 1999. "Combining Probability Distributions From Experts in Risk Analysis," Risk Analysis, John Wiley & Sons, vol. 19(2), pages 187-203, April.
    2. Willy Aspinall, 2010. "A route to more tractable expert advice," Nature, Nature, vol. 463(7279), pages 294-295, January.
    3. Lyon, Aidan & Wintle, Bonnie C. & Burgman, Mark, 2015. "Collective wisdom: Methods of confidence interval aggregation," Journal of Business Research, Elsevier, vol. 68(8), pages 1759-1767.
    4. Yaniv, Ilan, 1997. "Weighting and Trimming: Heuristics for Aggregating Judgments under Uncertainty," Organizational Behavior and Human Decision Processes, Elsevier, vol. 69(3), pages 237-249, March.
    5. Satopää, Ville A. & Baron, Jonathan & Foster, Dean P. & Mellers, Barbara A. & Tetlock, Philip E. & Ungar, Lyle H., 2014. "Combining multiple probability predictions using a simple logit model," International Journal of Forecasting, Elsevier, vol. 30(2), pages 344-356.
    6. Robert L. Winkler & Yael Grushka-Cockayne & Kenneth C. Lichtendahl Jr. & Victor Richmond R. Jose, 2019. "Probability Forecasts and Their Combination: A Research Perspective," Decision Analysis, INFORMS, vol. 16(4), pages 239-260, December.
    7. McKenzie, Craig R.M. & Liersch, Michael J. & Yaniv, Ilan, 2008. "Overconfidence in interval estimates: What does expertise buy you?," Organizational Behavior and Human Decision Processes, Elsevier, vol. 107(2), pages 179-191, November.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Alipourfard, Nazanin & Arendt, Beatrix & Benjamin, Daniel Jacob & Benkler, Noam & Bishop, Michael Metcalf & Burstein, Mark & Bush, Martin & Caverlee, James & Chen, Yiling & Clark, Chae, 2021. "Systematizing Confidence in Open Research and Evidence (SCORE)," SocArXiv 46mnb, Center for Open Science.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Hanea, A.M. & McBride, M.F. & Burgman, M.A. & Wintle, B.C. & Fidler, F. & Flander, L. & Twardy, C.R. & Manning, B. & Mascaro, S., 2017. "Investigate Discuss Estimate Aggregate for structured expert judgement," International Journal of Forecasting, Elsevier, vol. 33(1), pages 267-279.
    2. Satopää, Ville A. & Salikhov, Marat & Tetlock, Philip E. & Mellers, Barbara, 2023. "Decomposing the effects of crowd-wisdom aggregators: The bias–information–noise (BIN) model," International Journal of Forecasting, Elsevier, vol. 39(1), pages 470-485.
    3. Patrick Afflerbach & Christopher Dun & Henner Gimpel & Dominik Parak & Johannes Seyfried, 2021. "A Simulation-Based Approach to Understanding the Wisdom of Crowds Phenomenon in Aggregating Expert Judgment," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 63(4), pages 329-348, August.
    4. Anca M. Hanea & Marissa F. McBride & Mark A. Burgman & Bonnie C. Wintle, 2018. "The Value of Performance Weights and Discussion in Aggregated Expert Judgments," Risk Analysis, John Wiley & Sons, vol. 38(9), pages 1781-1794, September.
    5. Eric Libby & Leon Glass, 2010. "The Calculus of Committee Composition," PLOS ONE, Public Library of Science, vol. 5(9), pages 1-8, September.
    6. Ville A. Satopää & Marat Salikhov & Philip E. Tetlock & Barbara Mellers, 2021. "Bias, Information, Noise: The BIN Model of Forecasting," Management Science, INFORMS, vol. 67(12), pages 7599-7618, December.
    7. Julia R. Falconer & Eibe Frank & Devon L. L. Polaschek & Chaitanya Joshi, 2022. "Methods for Eliciting Informative Prior Distributions: A Critical Review," Decision Analysis, INFORMS, vol. 19(3), pages 189-204, September.
    8. Sulian Wang & Chen Wang, 2021. "Quantile Judgments of Lognormal Losses: An Experimental Investigation," Decision Analysis, INFORMS, vol. 18(1), pages 78-99, March.
    9. David V. Budescu & Eva Chen, 2015. "Identifying Expertise to Extract the Wisdom of Crowds," Management Science, INFORMS, vol. 61(2), pages 267-280, February.
    10. Fergus Bolger & Gene Rowe, 2015. "The Aggregation of Expert Judgment: Do Good Things Come to Those Who Weight?," Risk Analysis, John Wiley & Sons, vol. 35(1), pages 5-11, January.
    11. von der Gracht, Heiko A. & Hommel, Ulrich & Prokesch, Tobias & Wohlenberg, Holger, 2016. "Testing weighting approaches for forecasting in a Group Wisdom Support System environment," Journal of Business Research, Elsevier, vol. 69(10), pages 4081-4094.
    12. Brian H. MacGillivray, 2019. "Null Hypothesis Testing ≠ Scientific Inference: A Critique of the Shaky Premise at the Heart of the Science and Values Debate, and a Defense of Value‐Neutral Risk Assessment," Risk Analysis, John Wiley & Sons, vol. 39(7), pages 1520-1532, July.
    13. Meissner, Philip & Brands, Christian & Wulf, Torsten, 2017. "Quantifiying blind spots and weak signals in executive judgment: A structured integration of expert judgment into the scenario development process," International Journal of Forecasting, Elsevier, vol. 33(1), pages 244-253.
    14. Kenneth Gillingham & William D. Nordhaus & David Anthoff & Geoffrey Blanford & Valentina Bosetti & Peter Christensen & Haewon McJeon & John Reilly & Paul Sztorc, 2015. "Modeling Uncertainty in Climate Change: A Multi-Model Comparison," NBER Working Papers 21637, National Bureau of Economic Research, Inc.
    15. Avner Engel & Shalom Shachar, 2006. "Measuring and optimizing systems' quality costs and project duration," Systems Engineering, John Wiley & Sons, vol. 9(3), pages 259-280, September.
    16. Atanasov, Pavel & Witkowski, Jens & Ungar, Lyle & Mellers, Barbara & Tetlock, Philip, 2020. "Small steps to accuracy: Incremental belief updaters are better forecasters," Organizational Behavior and Human Decision Processes, Elsevier, vol. 160(C), pages 19-35.
    17. Jonathan Baron & Barbara A. Mellers & Philip E. Tetlock & Eric Stone & Lyle H. Ungar, 2014. "Two Reasons to Make Aggregated Probability Forecasts More Extreme," Decision Analysis, INFORMS, vol. 11(2), pages 133-145, June.
    18. Stapleton, L.M. & Hanna, P. & Ravenscroft, N. & Church, A., 2014. "A flexible ecosystem services proto-typology based on public opinion," Ecological Economics, Elsevier, vol. 106(C), pages 83-90.
    19. Franz Dietrich & Christian List, 2017. "Probabilistic opinion pooling generalized. Part one: general agendas," Social Choice and Welfare, Springer;The Society for Social Choice and Welfare, vol. 48(4), pages 747-786, April.
    20. Alison Wood Brooks & Francesca Gino & Maurice E. Schweitzer, 2015. "Smart People Ask for (My) Advice: Seeking Advice Boosts Perceptions of Competence," Management Science, INFORMS, vol. 61(6), pages 1421-1435, June.
