
Incorporating public values into evaluative criteria: Using crowdsourcing to identify criteria and standards

Author

Listed:
  • Harman, Elena
  • Azzam, Tarek

Abstract

At its core, evaluation involves the generation of value judgments. These evaluative judgments are based on comparing an evaluand’s performance to what the evaluand is supposed to do (criteria) and how well it is supposed to do it (standards). The aim of this four-phase study was to test whether criteria and standards can be set via crowdsourcing, a potentially cost- and time-effective approach to collecting public opinion data. In the first three phases, participants were presented with a program description, then asked to complete a task to either identify criteria (phase one), weigh criteria (phase two), or set standards (phase three). Phase four found that the crowd-generated criteria were high quality; more specifically, that they were clear and concise, complete, non-overlapping, and realistic. Overall, the study concludes that crowdsourcing has the potential to be used in evaluation for setting stable, high-quality criteria and standards.
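
The weighting task in phase two can be made concrete with a small sketch. The following is a minimal illustration, assuming each crowd worker allocates a fixed budget of points across candidate criteria; the criterion names and allocations are hypothetical and not drawn from the study's data.

```python
import statistics

# Hypothetical phase-two responses: each worker splits 100 points
# across candidate criteria (names and numbers are illustrative).
responses = [
    {"attendance": 50, "skill_gains": 30, "satisfaction": 20},
    {"attendance": 40, "skill_gains": 40, "satisfaction": 20},
    {"attendance": 60, "skill_gains": 25, "satisfaction": 15},
]

def aggregate_weights(responses):
    """Normalize each worker's allocation to sum to 1, then average
    per criterion to obtain crowd-level weights."""
    criteria = responses[0].keys()
    normalized = [
        {c: r[c] / sum(r.values()) for c in criteria} for r in responses
    ]
    return {c: statistics.mean(n[c] for n in normalized) for c in criteria}

print(aggregate_weights(responses))
# {'attendance': 0.5, 'skill_gains': 0.316..., 'satisfaction': 0.183...}
```

Stability of crowd-set weights, in the sense the abstract describes, could then be probed by resampling workers and checking how much the averaged weights shift.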

Suggested Citation

  • Harman, Elena & Azzam, Tarek, 2018. "Incorporating public values into evaluative criteria: Using crowdsourcing to identify criteria and standards," Evaluation and Program Planning, Elsevier, vol. 71(C), pages 68-82.
  • Handle: RePEc:eee:epplan:v:71:y:2018:i:c:p:68-82
    DOI: 10.1016/j.evalprogplan.2018.08.004

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0149718918300417
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.evalprogplan.2018.08.004?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. John Horton & David Rand & Richard Zeckhauser, 2011. "The online laboratory: conducting experiments in a real labor market," Experimental Economics, Springer;Economic Science Association, vol. 14(3), pages 399-425, September.
    2. Paolacci, Gabriele & Chandler, Jesse & Ipeirotis, Panagiotis G., 2010. "Running experiments on Amazon Mechanical Turk," Judgment and Decision Making, Cambridge University Press, vol. 5(5), pages 411-419, August.
    3. Azzam, Tarek & Harman, Elena, 2016. "Crowdsourcing for quantifying transcripts: An exploratory study," Evaluation and Program Planning, Elsevier, vol. 54(C), pages 63-73.
    4. Geist, Monica R., 2010. "Using the Delphi method to engage stakeholders: A comparison of two studies," Evaluation and Program Planning, Elsevier, vol. 33(2), pages 147-154, May.
    5. Berinsky, Adam J. & Huber, Gregory A. & Lenz, Gabriel S., 2012. "Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk," Political Analysis, Cambridge University Press, vol. 20(3), pages 351-368, July.
    6. Patton, Michael Quinn & Horton, Douglas, 2008. "Utilization-focused evaluation for agricultural innovation," ILAC Briefs 52533, Institutional Learning and Change (ILAC) Initiative.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Teasdale, Rebecca M., 2022. "Representing the values of program participants: Endogenous evaluative criteria," Evaluation and Program Planning, Elsevier, vol. 94(C).
    2. Kazak, Jan K. & Hendricks, Andreas & Simeunović, Nataša, 2019. "Hidden Public Value Identification of Real Estate Management Decisions," Real Estate Management and Valuation, Sciendo, vol. 27(4), pages 96-104, December.
    3. Haeussler, Carolin & Vieth, Sabrina, 2022. "A question worth a million: The expert, the crowd, or myself? An investigation of problem solving," Research Policy, Elsevier, vol. 51(3).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Masha Shunko & Julie Niederhoff & Yaroslav Rosokha, 2018. "Humans Are Not Machines: The Behavioral Impact of Queueing Design on Service Time," Management Science, INFORMS, vol. 64(1), pages 453-473, January.
    2. Brodeur, Abel & Cook, Nikolai & Heyes, Anthony, 2022. "We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell Us about Publication Bias and p-Hacking in Online Experiments," IZA Discussion Papers 15478, Institute of Labor Economics (IZA).
    3. Jeanette A.M.J. Deetlefs & Mathew Chylinski & Andreas Ortmann, 2015. "MTurk ‘Unscrubbed’: Exploring the good, the ‘Super’, and the unreliable on Amazon’s Mechanical Turk," Discussion Papers 2015-20, School of Economics, The University of New South Wales.
    4. Haas, Nicholas & Hassan, Mazen & Mansour, Sarah & Morton, Rebecca B., 2021. "Polarizing information and support for reform," Journal of Economic Behavior & Organization, Elsevier, vol. 185(C), pages 883-901.
    5. Cantarella, Michele & Strozzi, Chiara, 2019. "Workers in the Crowd: The Labour Market Impact of the Online Platform Economy," IZA Discussion Papers 12327, Institute of Labor Economics (IZA).
    6. Atalay, Kadir & Bakhtiar, Fayzan & Cheung, Stephen & Slonim, Robert, 2014. "Savings and prize-linked savings accounts," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 86-106.
    7. Azzam, Tarek & Harman, Elena, 2016. "Crowdsourcing for quantifying transcripts: An exploratory study," Evaluation and Program Planning, Elsevier, vol. 54(C), pages 63-73.
    8. Dato, Simon & Feess, Eberhard & Nieken, Petra, 2019. "Lying and reciprocity," Games and Economic Behavior, Elsevier, vol. 118(C), pages 193-218.
    9. Wladislaw Mill & Cornelius Schneider, 2023. "The Bright Side of Tax Evasion," CESifo Working Paper Series 10615, CESifo.
    10. Gandullia, Luca & Lezzi, Emanuela & Parciasepe, Paolo, 2020. "Replication with MTurk of the experimental design by Gangadharan, Grossman, Jones & Leister (2018): Charitable giving across donor types," Journal of Economic Psychology, Elsevier, vol. 78(C).
    11. Prissé, Benjamin & Jorrat, Diego, 2022. "Lab vs online experiments: No differences," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 100(C).
    12. Valerio Capraro & Hélène Barcelo, 2021. "Punishing defectors and rewarding cooperators: Do people discriminate between genders?," Journal of the Economic Science Association, Springer;Economic Science Association, vol. 7(1), pages 19-32, September.
    13. Lefgren, Lars J. & Sims, David P. & Stoddard, Olga B., 2016. "Effort, luck, and voting for redistribution," Journal of Public Economics, Elsevier, vol. 143(C), pages 89-97.
    14. Jochen Becker & Josip Medjedovic & Christoph Merkle, 2019. "The Effect of CEO Extraversion on Analyst Forecasts: Stereotypes and Similarity Bias," The Financial Review, Eastern Finance Association, vol. 54(1), pages 133-164, February.
    15. Bidhan L. Parmar & Adrian Keevil & Andrew C. Wicks, 2019. "People and Profits: The Impact of Corporate Objectives on Employees’ Need Satisfaction at Work," Journal of Business Ethics, Springer, vol. 154(1), pages 13-33, January.
    16. Alexsandros Cavgias & Raphael Corbi & Luis Meloni & Lucas M. Novaes, 2019. "EDITED DEMOCRACY: Media Manipulation and the News Coverage of Presidential Debates," Working Papers, Department of Economics 2019_17, University of São Paulo (FEA-USP).
    17. Brodeur, Abel & Cook, Nikolai & Heyes, Anthony, 2022. "We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell us about p-Hacking and Publication Bias in Online Experiments," GLO Discussion Paper Series 1157, Global Labor Organization (GLO).
    18. Ola Andersson & Jim Ingebretsen Carlson & Erik Wengström, 2021. "Differences Attract: An Experimental Study of Focusing in Economic Choice," The Economic Journal, Royal Economic Society, vol. 131(639), pages 2671-2692.
    19. Tim Straub & Henner Gimpel & Florian Teschner & Christof Weinhardt, 2015. "How (not) to Incent Crowd Workers," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 57(3), pages 167-179, June.
    20. Nicolas Jacquemet & Alexander G James & Stéphane Luchini & James J Murphy & Jason F Shogren, 2021. "Do truth-telling oaths improve honesty in crowd-working?," PLOS ONE, Public Library of Science, vol. 16(1), pages 1-18, January.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:epplan:v:71:y:2018:i:c:p:68-82. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/evalprogplan .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.