Printed from https://ideas.repec.org/a/plo/pone00/0323064.html

Aggregating soft labels from crowd annotations improves uncertainty estimation under distribution shift

Author

Listed:
  • Dustin Wright
  • Isabelle Augenstein

Abstract

Selecting an effective training signal for machine learning tasks is difficult: expert annotations are expensive, and crowd-sourced annotations may not be reliable. Recent work has demonstrated that learning from a distribution over labels acquired from crowd annotations can be effective both for performance and uncertainty estimation. However, this has mainly been studied using a limited set of soft-labeling methods in an in-domain setting. Additionally, no one method has been shown to consistently perform well across tasks, making it difficult to know a priori which to choose. To fill these gaps, this paper provides the first large-scale empirical study on learning from crowd labels in the out-of-domain setting, systematically analyzing 8 soft-labeling methods on 4 language and vision tasks. Additionally, we propose to aggregate soft labels via a simple average in order to achieve consistent performance across tasks. We demonstrate that this yields classifiers with improved predictive uncertainty estimation in most settings while maintaining consistent raw performance compared to learning from individual soft-labeling methods or taking a majority vote of the annotations. We additionally highlight that in regimes with abundant or minimal training data, the selection of soft-labeling method is less important, while for highly subjective labels and moderate amounts of training data, aggregation yields significant improvements in uncertainty estimation over individual methods. Code can be found at https://github.com/copenlu/aggregating-crowd-annotations-ood
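The aggregation the abstract proposes is a simple average of the per-item label distributions produced by different soft-labeling methods. A minimal sketch of that idea, with hypothetical method names and made-up distributions for a single 3-class item (the paper's actual methods and data differ):

```python
import numpy as np

# Hypothetical soft-label distributions for one annotated item with 3 classes,
# each produced by a different soft-labeling method (names are illustrative).
soft_labels = {
    "standard_normalization": np.array([0.6, 0.3, 0.1]),
    "softmax_normalization":  np.array([0.5, 0.4, 0.1]),
    "annotator_model":        np.array([0.7, 0.2, 0.1]),
}

# Aggregate via a uniform average. Because each input is a valid probability
# distribution, the average is too, and can serve directly as a soft training
# target (e.g. with a cross-entropy loss against the model's predictions).
aggregated = np.mean(list(soft_labels.values()), axis=0)

print(aggregated)        # the averaged distribution
print(aggregated.sum())  # still sums to 1
```

Averaging requires no tuning and no choice among methods, which is what makes the aggregated target consistent across tasks in the study's framing.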

Suggested Citation

  • Dustin Wright & Isabelle Augenstein, 2025. "Aggregating soft labels from crowd annotations improves uncertainty estimation under distribution shift," PLOS ONE, Public Library of Science, vol. 20(6), pages 1-28, June.
  • Handle: RePEc:plo:pone00:0323064
    DOI: 10.1371/journal.pone.0323064

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0323064
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0323064&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0323064?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Xiaoxiao Yang & Jing Zhang & Jun Peng & Lihong Lei, 2021. "Incentive mechanism based on Stackelberg game under reputation constraint for mobile crowdsensing," International Journal of Distributed Sensor Networks, , vol. 17(6), pages 15501477211, June.
    2. Junming Yin & Jerry Luo & Susan A. Brown, 2021. "Learning from Crowdsourced Multi-labeling: A Variational Bayesian Approach," Information Systems Research, INFORMS, vol. 32(3), pages 752-773, September.
    3. Yuqing Kong, 2021. "Information Elicitation Meets Clustering," Papers 2110.00952, arXiv.org.
    4. Tomer Geva & Maytal Saar‐Tsechansky, 2021. "Who Is a Better Decision Maker? Data‐Driven Expert Ranking Under Unobserved Quality," Production and Operations Management, Production and Operations Management Society, vol. 30(1), pages 127-144, January.
    5. Jesus Cerquides & Mehmet Oğuz Mülâyim & Jerónimo Hernández-González & Amudha Ravi Shankar & Jose Luis Fernandez-Marquez, 2021. "A Conceptual Probabilistic Framework for Annotation Aggregation of Citizen Science Data," Mathematics, MDPI, vol. 9(8), pages 1-15, April.
    6. Ahfock, Daniel & McLachlan, Geoffrey J., 2021. "Harmless label noise and informative soft-labels in supervised classification," Computational Statistics & Data Analysis, Elsevier, vol. 161(C).
    7. Xiu Fang & Suxin Si & Guohao Sun & Quan Z. Sheng & Wenjun Wu & Kang Wang & Hang Lv, 2022. "Selecting Workers Wisely for Crowdsourcing When Copiers and Domain Experts Co-exist," Future Internet, MDPI, vol. 14(2), pages 1-22, January.
    8. Alaa Ghanaiem & Evgeny Kagan & Parteek Kumar & Tal Raviv & Peter Glynn & Irad Ben-Gal, 2023. "Unsupervised Classification under Uncertainty: The Distance-Based Algorithm," Mathematics, MDPI, vol. 11(23), pages 1-19, November.
    9. Jing Wang & Panagiotis G. Ipeirotis & Foster Provost, 2017. "Cost-Effective Quality Assurance in Crowd Labeling," Information Systems Research, INFORMS, vol. 28(1), pages 137-158, March.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0323064. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.