
Learning from Crowdsourced Multi-labeling: A Variational Bayesian Approach

Author

Listed:
  • Junming Yin

    (Department of Management Information Systems, University of Arizona, Tucson, Arizona 85721; Tepper School of Business, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213)

  • Jerry Luo

    (Department of Mathematics, University of Arizona, Tucson, Arizona 85721)

  • Susan A. Brown

    (Department of Management Information Systems, University of Arizona, Tucson, Arizona 85721)

Abstract

Microtask crowdsourcing has emerged as a cost-effective approach for obtaining large-scale labeled data. Crowdsourcing platforms, such as MTurk, provide an online marketplace where task requesters can submit a batch of microtasks for a crowd of workers to complete for a small monetary compensation. As the information collected from a crowd can be prone to errors, additional algorithmic techniques are needed to infer the ground truth labels from noisy annotations by workers with heterogeneous quality. Moreover, it would be very beneficial to identify and possibly filter out low-quality workers to foster the creation of a healthy and sustainable crowdsourcing ecosystem. Much of the existing literature on crowd labeling has focused on the single-label setting. However, in many application domains, it is common that each item to be annotated can be assigned to multiple categories simultaneously. In this paper, we present a variety of new approaches for modeling label dependency and worker quality in the context of multi-label crowdsourcing. To capture label dependency, we introduce three methods based on a Bayesian mixture of Bernoulli distributions, its Dirichlet process extension, and a multivariate logit-normal distribution. We also propose two distinct generative models for characterizing shared and hierarchical structures of worker quality. Efficient collapsed and Laplace variational inference algorithms are then developed to jointly infer ground truth labels and worker quality. Extensive simulation and MTurk experiments show that the models based on integrating Bernoulli mixtures and shared structure of worker quality achieve a significant improvement over other state-of-the-art methods. Our study clearly highlights that joint and effective modeling of label dependency and worker quality is crucial to the design of a multi-label crowdsourcing system. The proposed framework also has great potential to be extended to a broader range of applications, in which different opinions need to be combined to measure multiple perspectives of an object.
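
To make the aggregation idea concrete, here is a minimal, hypothetical sketch of label aggregation with shared worker quality. It is not the authors' model: it treats labels independently (ignoring the label dependency that the paper captures with Bernoulli mixtures, a Dirichlet process, or a logit-normal distribution) and uses plain EM rather than the paper's collapsed and Laplace variational inference. The function name aggregate_multilabel and all parameter choices are illustrative assumptions.

    import numpy as np

    def aggregate_multilabel(Y, n_iter=50, tol=1e-6):
        """Aggregate noisy multi-label annotations.

        Y has shape (n_items, n_workers, n_labels); entries are 0/1 votes or
        np.nan where a worker did not annotate an item. Returns per-item label
        posteriors and per-worker sensitivity/specificity (shared across labels).
        """
        n_items, n_workers, n_labels = Y.shape
        observed = ~np.isnan(Y)
        Yf = np.nan_to_num(Y)                  # missing votes contribute nothing below

        # Initialise label posteriors with per-label majority vote.
        votes = Yf.sum(axis=1) / np.maximum(observed.sum(axis=1), 1)
        q = np.clip(votes, 0.05, 0.95)         # q[i, l] ~ P(item i truly has label l)

        sens = np.full(n_workers, 0.8)         # P(worker votes 1 | true label is 1)
        spec = np.full(n_workers, 0.8)         # P(worker votes 0 | true label is 0)
        prev = np.full(n_labels, 0.5)          # prior prevalence of each label

        for _ in range(n_iter):
            # E-step: posterior over each true label given all workers' votes.
            log_p1 = np.broadcast_to(np.log(prev), (n_items, n_labels)).copy()
            log_p0 = np.broadcast_to(np.log1p(-prev), (n_items, n_labels)).copy()
            for j in range(n_workers):
                obs, y = observed[:, j, :], Yf[:, j, :]
                log_p1 += obs * (y * np.log(sens[j]) + (1 - y) * np.log(1 - sens[j]))
                log_p0 += obs * ((1 - y) * np.log(spec[j]) + y * np.log(1 - spec[j]))
            q_new = 1.0 / (1.0 + np.exp(log_p0 - log_p1))

            # M-step: re-estimate label prevalence and shared worker quality.
            prev = q_new.mean(axis=0).clip(1e-3, 1 - 1e-3)
            for j in range(n_workers):
                obs, y = observed[:, j, :], Yf[:, j, :]
                w1, w0 = (q_new * obs).sum(), ((1 - q_new) * obs).sum()
                sens[j] = np.clip((q_new * obs * y).sum() / max(w1, 1e-9), 1e-3, 1 - 1e-3)
                spec[j] = np.clip(((1 - q_new) * obs * (1 - y)).sum() / max(w0, 1e-9), 1e-3, 1 - 1e-3)

            if np.abs(q_new - q).max() < tol:
                q = q_new
                break
            q = q_new

        return q, sens, spec

With, say, 100 items, 5 workers, and 10 labels, thresholding q at 0.5 gives the inferred label sets, and workers whose estimated sensitivity or specificity stays near 0.5 can be flagged as low quality, which is the kind of worker filtering the abstract motivates.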

Suggested Citation

  • Junming Yin & Jerry Luo & Susan A. Brown, 2021. "Learning from Crowdsourced Multi-labeling: A Variational Bayesian Approach," Information Systems Research, INFORMS, vol. 32(3), pages 752-773, September.
  • Handle: RePEc:inm:orisre:v:32:y:2021:i:3:p:752-773
    DOI: 10.1287/isre.2021.1000

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/isre.2021.1000
    Download Restriction: no

    File URL: https://libkey.io/10.1287/isre.2021.1000?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. A. P. Dawid & A. M. Skene, 1979. "Maximum Likelihood Estimation of Observer Error‐Rates Using the EM Algorithm," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 28(1), pages 20-28, March.
    2. David M. Blei & Alp Kucukelbir & Jon D. McAuliffe, 2017. "Variational Inference: A Review for Statisticians," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 112(518), pages 859-877, April.
    3. Grigorios Tsoumakas & Ioannis Katakis, 2007. "Multi-Label Classification: An Overview," International Journal of Data Warehousing and Mining (IJDWM), IGI Global, vol. 3(3), pages 1-13, July.
    4. Guest Editors: Hemant Jain & Balaji Padmanabhan & Paul A. Pavlou & Raghu T. Santanam, 2018. "Call for Papers—Special Issue of Information Systems Research—Humans, Algorithms, and Augmented Intelligence: The Future of Work, Organizations, and Society," Information Systems Research, INFORMS, vol. 29(1), pages 250-251, March.
    5. David R. Karger & Sewoong Oh & Devavrat Shah, 2014. "Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems," Operations Research, INFORMS, vol. 62(1), pages 1-24, February.
    6. Michael Luca & Georgios Zervas, 2016. "Fake It Till You Make It: Reputation, Competition, and Yelp Review Fraud," Management Science, INFORMS, vol. 62(12), pages 3412-3427, December.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project. Subscribe to its RSS feed for this item.


    Cited by:

    1. Hemant Jain & Balaji Padmanabhan & Paul A. Pavlou & T. S. Raghu, 2021. "Editorial for the Special Section on Humans, Algorithms, and Augmented Intelligence: The Future of Work, Organizations, and Society," Information Systems Research, INFORMS, vol. 32(3), pages 675-687, September.
    2. Ruyi Ge & Zhiqiang (Eric) Zheng & Xuan Tian & Li Liao, 2021. "Human–Robot Interaction: When Investors Adjust the Usage of Robo-Advisors in Peer-to-Peer Lending," Information Systems Research, INFORMS, vol. 32(3), pages 774-785, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Tomer Geva & Maytal Saar‐Tsechansky, 2021. "Who Is a Better Decision Maker? Data‐Driven Expert Ranking Under Unobserved Quality," Production and Operations Management, Production and Operations Management Society, vol. 30(1), pages 127-144, January.
    2. Marios Kokkodis, 2021. "Dynamic, Multidimensional, and Skillset-Specific Reputation Systems for Online Work," Information Systems Research, INFORMS, vol. 32(3), pages 688-712, September.
    3. M. Narciso, 2022. "The Unreliability of Online Review Mechanisms," Journal of Consumer Policy, Springer, vol. 45(3), pages 349-368, September.
    4. Shen Liu & Hongyan Liu, 2021. "Tagging Items Automatically Based on Both Content Information and Browsing Behaviors," INFORMS Journal on Computing, INFORMS, vol. 33(3), pages 882-897, July.
    5. Gary Bolton & Kevin Breuer & Ben Greiner & Axel Ockenfels, 2023. "Fixing feedback revision rules in online markets," Journal of Economics & Management Strategy, Wiley Blackwell, vol. 32(2), pages 247-256, April.
    6. Loaiza-Maya, Rubén & Smith, Michael Stanley & Nott, David J. & Danaher, Peter J., 2022. "Fast and accurate variational inference for models with many latent variables," Journal of Econometrics, Elsevier, vol. 230(2), pages 339-362.
    7. Xing Qin & Shuangge Ma & Mengyun Wu, 2023. "Two‐level Bayesian interaction analysis for survival data incorporating pathway information," Biometrics, The International Biometric Society, vol. 79(3), pages 1761-1774, September.
    8. Youngseon Lee & Seongil Jo & Jaeyong Lee, 2022. "A variational inference for the Lévy adaptive regression with multiple kernels," Computational Statistics, Springer, vol. 37(5), pages 2493-2515, November.
    9. Nathaniel Tomasetti & Catherine Forbes & Anastasios Panagiotelis, 2019. "Updating Variational Bayes: Fast Sequential Posterior Inference," Monash Econometrics and Business Statistics Working Papers 13/19, Monash University, Department of Econometrics and Business Statistics.
    10. Gael M. Martin & David T. Frazier & Christian P. Robert, 2020. "Computing Bayes: Bayesian Computation from 1763 to the 21st Century," Monash Econometrics and Business Statistics Working Papers 14/20, Monash University, Department of Econometrics and Business Statistics.
    11. Dara Lee Luca & Michael Luca, 2019. "Survival of the Fittest: The Impact of the Minimum Wage on Firm Exit," NBER Working Papers 25806, National Bureau of Economic Research, Inc.
    12. Sungsik Park & Woochoel Shin & Jinhong Xie, 2021. "The Fateful First Consumer Review," Marketing Science, INFORMS, vol. 40(3), pages 481-507, May.
    13. Djohan Bonnet & Tifenn Hirtzlin & Atreya Majumdar & Thomas Dalgaty & Eduardo Esmanhotto & Valentina Meli & Niccolo Castellani & Simon Martin & Jean-François Nodin & Guillaume Bourgeois & Jean-Michel P, 2023. "Bringing uncertainty quantification to the extreme-edge with memristor-based Bayesian neural networks," Nature Communications, Nature, vol. 14(1), pages 1-13, December.
    14. Lingfang (Ivy) Li & Steven Tadelis & Xiaolan Zhou, 2020. "Buying reputation as a signal of quality: Evidence from an online marketplace," RAND Journal of Economics, RAND Corporation, vol. 51(4), pages 965-988, December.
    15. Ho, Paul, 2023. "Global robust Bayesian analysis in large models," Journal of Econometrics, Elsevier, vol. 235(2), pages 608-642.
    16. Liang, Xinbin & Liu, Zhuoxuan & Wang, Jie & Jin, Xinqiao & Du, Zhimin, 2023. "Uncertainty quantification-based robust deep learning for building energy systems considering distribution shift problem," Applied Energy, Elsevier, vol. 337(C).
    17. Plé, Loïc & Demangeot, Catherine, 2020. "Social contagion of online and offline deviant behaviors and its value outcomes: The case of tourism ecosystems," Journal of Business Research, Elsevier, vol. 117(C), pages 886-896.
    18. Gael M. Martin & David T. Frazier & Ruben Loaiza-Maya & Florian Huber & Gary Koop & John Maheu & Didier Nibbering & Anastasios Panagiotelis, 2023. "Bayesian Forecasting in the 21st Century: A Modern Review," Monash Econometrics and Business Statistics Working Papers 1/23, Monash University, Department of Econometrics and Business Statistics.
    19. Xiaoxiao Yang & Jing Zhang & Jun Peng & Lihong Lei, 2021. "Incentive mechanism based on Stackelberg game under reputation constraint for mobile crowdsensing," International Journal of Distributed Sensor Networks, , vol. 17(6), pages 15501477211, June.
    20. Seokhyun Chung & Raed Al Kontar & Zhenke Wu, 2022. "Weakly Supervised Multi-output Regression via Correlated Gaussian Processes," INFORMS Journal on Data Science, INFORMS, vol. 1(2), pages 115-137, October.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:orisre:v:32:y:2021:i:3:p:752-773. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.