
Amazon Mechanical Turk workers can provide consistent and economically meaningful data

Author

Listed:
  • David Johnson
  • John Barry Ryan

Abstract

Amazon Mechanical Turk (AMT) is an online labor market that is used increasingly often in the social sciences, despite significant questions about the efficacy of the platform. In this article, we address some of these questions by examining the consistency of the characteristics of individuals who participate in studies posted on AMT. The primary subjects analyzed are those who participated in at least two of eleven experiments run on AMT between September 2012 and January 2018. We demonstrate that subjects consistently report their age, gender, subjective willingness to take risk, and impulsiveness. Further, subjective willingness to take risk is significantly correlated with decisions made in a simple lottery experiment with real stakes, even when the risk measure was reported months, sometimes years, earlier. This suggests that the quality of data obtained via AMT is not badly harmed by the platform's lack of control and low stakes.
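To make the analysis described above concrete, here is a minimal sketch, in Python, of the two checks the abstract mentions: matching workers who appear in more than one session, testing whether their self-reports agree, and correlating a stated willingness to take risk from one session with incentivized lottery choices from a later one. All worker IDs, column names, and values are hypothetical; this is not the authors' code or data.

    # Hypothetical panel of AMT workers observed in two sessions.
    # Column names (worker_id, age, risk_willingness, lottery_choice)
    # are illustrative assumptions, not the study's actual variables.
    import pandas as pd
    from scipy.stats import spearmanr

    s1 = pd.DataFrame({
        "worker_id": ["A1", "B2", "C3", "D4"],
        "age": [25, 34, 41, 29],
        "risk_willingness": [7, 3, 5, 8],  # 0-10 self-report, Dohmen et al. (2011) style
    })
    s2 = pd.DataFrame({
        "worker_id": ["A1", "B2", "C3", "D4"],
        "age": [26, 34, 42, 29],
        "lottery_choice": [5, 2, 3, 6],  # risky options chosen in a paid lottery task
    })

    # Keep only workers who appear in both sessions.
    panel = s1.merge(s2, on="worker_id", suffixes=("_t1", "_t2"))

    # Consistency of self-reports: ages should differ by at most the elapsed time.
    age_gap = (panel["age_t2"] - panel["age_t1"]).abs()
    print("share with age gap <= 1 year:", (age_gap <= 1).mean())

    # Economic meaningfulness: stated risk willingness at t1 should
    # correlate with incentivized lottery behavior at t2.
    rho, p = spearmanr(panel["risk_willingness"], panel["lottery_choice"])
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

On real data the same merge-then-correlate pattern would be run for each pair of experiments, with the time elapsed between sessions as an additional control.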

Suggested Citation

  • David Johnson & John Barry Ryan, 2020. "Amazon Mechanical Turk workers can provide consistent and economically meaningful data," Southern Economic Journal, John Wiley & Sons, vol. 87(1), pages 369-385, July.
  • Handle: RePEc:wly:soecon:v:87:y:2020:i:1:p:369-385
    DOI: 10.1002/soej.12451

    Download full text from publisher

    File URL: https://doi.org/10.1002/soej.12451
    Download Restriction: no

    File URL: https://libkey.io/10.1002/soej.12451?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item


    References listed on IDEAS

    1. Mullinix, Kevin J. & Leeper, Thomas J. & Druckman, James N. & Freese, Jeremy, 2015. "The Generalizability of Survey Experiments," Journal of Experimental Political Science, Cambridge University Press, vol. 2(2), pages 109-138, January.
    2. John Gibson & David Johnson, 2019. "Are Online Samples Credible? Evidence from Risk Elicitation Tests," Atlantic Economic Journal, Springer;International Atlantic Economic Society, vol. 47(3), pages 377-379, September.
    3. John Horton & David Rand & Richard Zeckhauser, 2011. "The online laboratory: conducting experiments in a real labor market," Experimental Economics, Springer;Economic Science Association, vol. 14(3), pages 399-425, September.
    4. Charness, Gary & Viceisza, Angelino, 2016. "Three Risk-elicitation Methods in the Field - Evidence from Rural Senegal," Review of Behavioral Economics, now publishers, vol. 3(2), pages 145-171, July.
    5. Giuseppe Attanasi & Nikolaos Georgantzís & Valentina Rotondi & Daria Vigani, 2018. "Lottery- and survey-based risk attitudes linked through a multichoice elicitation task," Theory and Decision, Springer, vol. 84(3), pages 341-372, May.
    6. Paolo Crosetto & Antonio Filippin, 2016. "A theoretical and experimental appraisal of four risk elicitation methods," Experimental Economics, Springer;Economic Science Association, vol. 19(3), pages 613-641, September.
    7. Thomas Dohmen & Armin Falk & David Huffman & Uwe Sunde & Jürgen Schupp & Gert G. Wagner, 2011. "Individual Risk Attitudes: Measurement, Determinants, And Behavioral Consequences," Journal of the European Economic Association, European Economic Association, vol. 9(3), pages 522-550, June.
    8. Judgment and Decision Making, vol. 7(6), pages 716-727 (2012); RePEc handle repec:cup:judgdm:v:7:y:2012:i:6:p:716-727 is not listed on IDEAS.
    9. Catherine C. Eckel, 2019. "Measuring individual risk preferences," IZA World of Labor, Institute of Labor Economics (IZA), pages 454-454, June.
    10. Benjamin Beranek & Robin Cubitt & Simon Gächter, 2015. "Stated and revealed inequality aversion in three subject pools," Journal of the Economic Science Association, Springer;Economic Science Association, vol. 1(1), pages 43-58, July.
    11. Judgment and Decision Making, vol. 5(5), pages 411-419 (2010); RePEc handle repec:cup:judgdm:v:5:y:2010:i:5:p:411-419 is not listed on IDEAS.
    12. Armin Falk & Anke Becker & Thomas Dohmen & David Huffman & Uwe Sunde, 2023. "The Preference Survey Module: A Validated Instrument for Measuring Risk, Time, and Social Preferences," Management Science, INFORMS, vol. 69(4), pages 1935-1950, April.
    13. David B. Johnson & Matthew D. Webb, 2016. "Decision Making with Risky, Rival Outcomes: Theory and Evidence," Carleton Economic Papers 16-12, Carleton University, Department of Economics.
    14. Peter P J L Verkoeijen & Samantha Bouwmeester, 2014. "Does Intuition Cause Cooperation?," PLOS ONE, Public Library of Science, vol. 9(5), pages 1-8, May.
    15. Lönnqvist, Jan-Erik & Verkasalo, Markku & Walkowitz, Gari & Wichardt, Philipp C., 2015. "Measuring individual risk attitudes in the lab: Task or ask? An empirical comparison," Journal of Economic Behavior & Organization, Elsevier, vol. 119(C), pages 254-266.
    16. Andreas Pedroni & Renato Frey & Adrian Bruhin & Gilles Dutilh & Ralph Hertwig & Jörg Rieskamp, 2017. "The risk elicitation puzzle," Nature Human Behaviour, Nature, vol. 1(11), pages 803-809, November.
    17. Alexander Coppock & Thomas J. Leeper & Kevin J. Mullinix, 2018. "Generalizability of heterogeneous treatment effect estimates across samples," Proceedings of the National Academy of Sciences, Proceedings of the National Academy of Sciences, vol. 115(49), pages 12441-12446, December.
    18. Kevin E. Levay & Jeremy Freese & James N. Druckman, 2016. "The Demographic and Political Composition of Mechanical Turk Samples," SAGE Open, , vol. 6(1), pages 21582440166, March.
    19. Galizzi, Matteo M. & Machado, Sara R. & Miniaci, Raffaele, 2016. "Temporal stability, cross-validity, and external validity of risk preferences measures: experimental evidence from a UK representative sample," LSE Research Online Documents on Economics 67554, London School of Economics and Political Science, LSE Library.
    20. Antonio A. Arechar & Gordon T. Kraft-Todd & David G. Rand, 2017. "Turking overtime: how participant characteristics and behavior vary over time and day on Amazon Mechanical Turk," Journal of the Economic Science Association, Springer;Economic Science Association, vol. 3(1), pages 1-11, July.
    21. Berinsky, Adam J. & Huber, Gregory A. & Lenz, Gabriel S., 2012. "Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk," Political Analysis, Cambridge University Press, vol. 20(3), pages 351-368, July.
    22. Chetan Dave & Catherine Eckel & Cathleen Johnson & Christian Rojas, 2010. "Eliciting risk preferences: When is simple better?," Journal of Risk and Uncertainty, Springer, vol. 41(3), pages 219-243, December.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Maurizio Canavari & Andreas C. Drichoutis & Jayson L. Lusk & Rodolfo M. Nayga, Jr., 2018. "How to run an experimental auction: A review of recent advances," Working Papers 2018-5, Agricultural University of Athens, Department Of Agricultural Economics.
    2. Abel Brodeur & Nikolai M. Cook & Anthony Heyes, 2022. "We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell Us about Publication Bias and p-Hacking in Online Experiments," LCERPA Working Papers am0133, Laurier Centre for Economic Research and Policy Analysis.
    3. Johannes G. Jaspersen & Marc A. Ragin & Justin R. Sydnor, 2022. "Insurance demand experiments: Comparing crowdworking to the lab," Journal of Risk & Insurance, The American Risk and Insurance Association, vol. 89(4), pages 1077-1107, December.
    4. John Gibson & David Johnson, 0. "Breaking Bad: When Being Disadvantaged Incentivizes (Seemingly) Risky Behavior," Eastern Economic Journal, Palgrave Macmillan;Eastern Economic Association, vol. 0, pages 1-28.
    5. Brodeur, Abel & Cook, Nikolai & Heyes, Anthony, 2022. "We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell us about p-Hacking and Publication Bias in Online Experiments," GLO Discussion Paper Series 1157, Global Labor Organization (GLO).
    6. Charness, Gary & Dao, Lien & Shurchkov, Olga, 2022. "Competing now and then: The effects of delay on competitiveness across gender," Journal of Economic Behavior & Organization, Elsevier, vol. 198(C), pages 612-630.
    7. Scott Simon Boddery & Damon Cann & Laura Moyer & Jeff Yates, 2023. "The role of cable news hosts in public support for Supreme Court decisions," Journal of Empirical Legal Studies, John Wiley & Sons, vol. 20(4), pages 1045-1069, December.
    8. Peilu Zhang & Marco A. Palma, 2021. "Compulsory Versus Voluntary Insurance: An Online Experiment," American Journal of Agricultural Economics, John Wiley & Sons, vol. 103(1), pages 106-125, January.
    9. Rene Schwaiger & Laura Hueber, 2021. "Do MTurkers Exhibit Myopic Loss Aversion?," Working Papers 2021-12, Faculty of Economics and Statistics, Universität Innsbruck.
    10. Karl van der Schyff & Greg Foster & Karen Renaud & Stephen Flowerday, 2023. "Online Privacy Fatigue: A Scoping Review and Research Agenda," Future Internet, MDPI, vol. 15(5), pages 1-31, April.
    11. Haas, Nicholas & Hassan, Mazen & Mansour, Sarah & Morton, Rebecca B., 2021. "Polarizing information and support for reform," Journal of Economic Behavior & Organization, Elsevier, vol. 185(C), pages 883-901.
    12. Kaitlynn Sandstrom‐Mistry & Frank Lupi & Hyunjung Kim & Joseph A. Herriges, 2023. "Comparing water quality valuation across probability and non‐probability samples," Applied Economic Perspectives and Policy, John Wiley & Sons, vol. 45(2), pages 744-761, June.
    13. Dominik J. Wettstein & Stefan Boes, 2020. "The impact of reimbursement negotiations on cost and availability of new pharmaceuticals: evidence from an online experiment," Health Economics Review, Springer, vol. 10(1), pages 1-15, December.
    14. David Chavanne & Zak Danz & Jitu Dribssa & Rachel Powell & Matthew Sambor, 2022. "Context and the Perceived Fairness of Price Increases Coming out of COVID‐19," Social Science Quarterly, Southwestern Social Science Association, vol. 103(1), pages 55-68, January.
    15. John Gibson & David Johnson, 2021. "Breaking Bad: When Being Disadvantaged Incentivizes (Seemingly) Risky Behavior," Eastern Economic Journal, Palgrave Macmillan;Eastern Economic Association, vol. 47(1), pages 107-134, January.
    16. Luke Fowler & Stephen Utych, 2021. "Are people better employees than machines? Dehumanizing language and employee performance appraisals," Social Science Quarterly, Southwestern Social Science Association, vol. 102(4), pages 2006-2019, July.
    17. Abhari, Kaveh & McGuckin, Summer, 2023. "Limiting factors of open innovation organizations: A case of social product development and research agenda," Technovation, Elsevier, vol. 119(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one (a toy scoring sketch follows the list).
    1. Tamás Csermely & Alexander Rabas, 2016. "How to reveal people’s preferences: Comparing time consistency and predictive power of multiple price list risk elicitation methods," Journal of Risk and Uncertainty, Springer, vol. 53(2), pages 107-136, December.
    2. Giuseppe Attanasi & Nikolaos Georgantzís & Valentina Rotondi & Daria Vigani, 2018. "Lottery- and survey-based risk attitudes linked through a multichoice elicitation task," Theory and Decision, Springer, vol. 84(3), pages 341-372, May.
    3. Joshua Tasoff & Wenjie Zhang, 2022. "The Performance of Time-Preference and Risk-Preference Measures in Surveys," Management Science, INFORMS, vol. 68(2), pages 1149-1173, February.
    4. Gary Charness & Thomas Garcia & Theo Offerman & Marie Claire Villeval, 2020. "Do measures of risk attitude in the laboratory predict behavior under risk in and outside of the laboratory?," Journal of Risk and Uncertainty, Springer, vol. 60(2), pages 99-123, April.
    5. Ranganathan, Kavitha & Lejarraga, Tomás, 2021. "Elicitation of risk preferences through satisficing," Journal of Behavioral and Experimental Finance, Elsevier, vol. 32(C).
    6. Caferra, Rocco & Morone, Andrea & Pierno, Donato, 2024. "From Measurements to Measures: Learning Risk Preferences under Different Risk Elicitation Methods," MPRA Paper 121590, University Library of Munich, Germany.
    7. Naranjo, Maria A. & Alpízar, Francisco & Martinsson, Peter, 2019. "Alternatives for Risk Elicitation in the Field: Evidence from Coffee Farmers in Costa Rica," EfD Discussion Paper 19-21, Environment for Development, University of Gothenburg.
    8. Eriksen, Kristoffer W. & Kvaløy, Ola & Luzuriaga, Miguel, 2020. "Risk-taking on behalf of others," Journal of Behavioral and Experimental Finance, Elsevier, vol. 26(C).
    9. Rafaï, Ismaël & Blayac, Thierry & Dubois, Dimitri & Duchêne, Sébastien & Nguyen-Van, Phu & Ventelou, Bruno & Willinger, Marc, 2023. "Stated preferences outperform elicited preferences for predicting reported compliance with COVID-19 prophylactic measures," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 107(C).
    10. Fabien Perez & Guillaume Hollard & Radu Vranceanu, 2021. "How serious is the measurement-error problem in risk-aversion tasks?," Journal of Risk and Uncertainty, Springer, vol. 63(3), pages 319-342, December.
    11. Menkhoff, Lukas & Sakha, Sahra, 2017. "Estimating risky behavior with multiple-item risk measures," Journal of Economic Psychology, Elsevier, vol. 59(C), pages 59-86.
    12. Binder, Carola Conces, 2022. "Time-of-day and day-of-week variations in Amazon Mechanical Turk survey responses," Journal of Macroeconomics, Elsevier, vol. 71(C).
    13. Paolo Crosetto & Antonio Filippin, 2023. "Safe options and gender differences in risk attitudes," Journal of Risk and Uncertainty, Springer, vol. 66(1), pages 19-46, February.
    14. Andrea Hackethal & Michael Kirchler & Christine Laudenbach & Michael Razen & Annika Weber, 2020. "On the role of monetary incentives in risk preference elicitation experiments," Working Papers 2020-29, Faculty of Economics and Statistics, Universität Innsbruck.
    15. Bruns, Selina JK & Hermann, Daniel & Musshoff, Oliver, 2022. "Is gamification a curse or blessing for the design of risk elicitation methods in the field? Experimental evidence from Cambodian smallholder farmers," 2022 Annual Meeting, July 31-August 2, Anaheim, California 322263, Agricultural and Applied Economics Association.
    16. Catherine C. Eckel, 2019. "Measuring individual risk preferences," IZA World of Labor, Institute of Labor Economics (IZA), pages 454-454, June.
    17. Hackethal, Andreas & Kirchler, Michael & Laudenbach, Christine & Razen, Michael & Weber, Annika, 2021. "On the role of monetary incentives in risk preference elicitation experiments," SAFE Working Paper Series 286, Leibniz Institute for Financial Research SAFE, revised 2021.
    18. David B. Johnson & Matthew D. Webb, 2017. "An Experimental Test of the No Safety Schools Theorem," Carleton Economic Papers 17-10, Carleton University, Department of Economics.
    19. Collier, Trevor & Cotten, Stephen & Roush, Justin, 2022. "Using pandemic behavior to test the external validity of laboratory measurements of risk aversion and guilt," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 101(C).
    20. Galizzi, Matteo M. & Machado, Sara R. & Miniaci, Raffaele, 2016. "Temporal stability, cross-validity, and external validity of risk preferences measures: experimental evidence from a UK representative sample," LSE Research Online Documents on Economics 67554, London School of Economics and Political Science, LSE Library.
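As a rough illustration of how a relatedness list of this kind can be scored, the sketch below counts the references a candidate item shares with this article (bibliographic coupling) plus the citing works they have in common (co-citation). The handles and the unweighted sum are invented for illustration; this is not RePEc's actual ranking algorithm.

    # Toy relatedness score: shared references + shared citers.
    # All handles below are abbreviated stand-ins, not real RePEc handles.
    def relatedness(refs_a: set, cites_a: set, refs_b: set, cites_b: set) -> int:
        """Bibliographic coupling plus co-citation, as an unweighted count."""
        return len(refs_a & refs_b) + len(cites_a & cites_b)

    this_item = {"refs": {"dohmen2011", "horton2011", "berinsky2012"},
                 "cites": {"brodeur2022", "jaspersen2022"}}
    candidate = {"refs": {"dohmen2011", "berinsky2012", "eckel2019"},
                 "cites": {"brodeur2022"}}

    score = relatedness(this_item["refs"], this_item["cites"],
                        candidate["refs"], candidate["cites"])
    print("relatedness score:", score)  # 2 shared references + 1 shared citer = 3

Candidate items would then be sorted by this score, highest first.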

    More about this item

    JEL classification:

    • C81 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - Methodology for Collecting, Estimating, and Organizing Microeconomic Data; Data Access
    • C89 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - Other
    • C90 - Mathematical and Quantitative Methods - - Design of Experiments - - - General
    • C99 - Mathematical and Quantitative Methods - - Design of Experiments - - - Other


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:wly:soecon:v:87:y:2020:i:1:p:369-385. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Wiley Content Delivery (email available below). General contact details of provider: https://doi.org/10.1002/(ISSN)2325-8012.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.