IDEAS home Printed from https://ideas.repec.org/a/wly/empleg/v9y2012i4p765-794.html

The Effect of Blinded Experts on Juror Verdicts

Author

Listed:
  • Christopher T. Robertson
  • David V. Yokum

Abstract

“Blind expertise” has been proposed as an institutional solution to the problem of bias in expert witness testimony in litigation (Robertson). At the request of a litigant, an intermediary selects a qualified expert and pays the expert to review a case without knowing which side requested the opinion. This article reports an experiment testing the hypothesis that, compared to traditional experts, such “blinded experts” are more persuasive to jurors. A national sample of mock jurors (N = 275) watched an online video of a staged medical malpractice trial, including testimony from two medical experts, one of whom (or neither, in the control condition) was randomly assigned to be a blind expert. We also manipulated whether the judge provided a special jury instruction explaining the blinding concept. Descriptively, the data suggest juror reluctance to impose liability: despite an experimental design that included negligent medical care, only 46 percent of jurors found negligence in the control condition, which represents the status quo. Blind experts, testifying on either side, were perceived as significantly more credible and were more persuasive, in that they doubled (or halved) the odds of a favorable verdict and increased (or decreased) simulated damages awards by over $100,000. The change in damages awards appears to be due to jurors hedging their awards, an effect that interacted with the blind expert's influence on juror certainty. Use of a blind expert may be a rational strategy for litigants, even without judicial intervention in the form of special jury instructions or otherwise.

Suggested Citation

  • Christopher T. Robertson & David V. Yokum, 2012. "The Effect of Blinded Experts on Juror Verdicts," Journal of Empirical Legal Studies, John Wiley & Sons, vol. 9(4), pages 765-794, December.
  • Handle: RePEc:wly:empleg:v:9:y:2012:i:4:p:765-794
    DOI: 10.1111/j.1740-1461.2012.01273.x

    Download full text from publisher

    File URL: https://doi.org/10.1111/j.1740-1461.2012.01273.x
    Download Restriction: no

    File URL: https://libkey.io/10.1111/j.1740-1461.2012.01273.x?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. David M. Studdert & Michelle M. Mello, 2007. "When Tort Resolutions Are "Wrong": Predictors of Discordant Outcomes in Medical Malpractice Litigation," The Journal of Legal Studies, University of Chicago Press, vol. 36(S2), pages 47-78, June.
    2. Gabriele Paolacci & Jesse Chandler & Panagiotis G. Ipeirotis, 2010. "Running experiments on Amazon Mechanical Turk," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 5(5), pages 411-419, August.
    3. Berinsky, Adam J. & Huber, Gregory A. & Lenz, Gabriel S., 2012. "Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk," Political Analysis, Cambridge University Press, vol. 20(3), pages 351-368, July.
    4. Kahneman, Daniel & Schkade, David & Sunstein, Cass R, 1998. "Shared Outrage and Erratic Awards: The Psychology of Punitive Damages," Journal of Risk and Uncertainty, Springer, vol. 16(1), pages 49-86, April.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Masha Shunko & Julie Niederhoff & Yaroslav Rosokha, 2018. "Humans Are Not Machines: The Behavioral Impact of Queueing Design on Service Time," Management Science, INFORMS, vol. 64(1), pages 453-473, January.
    2. Haas, Nicholas & Hassan, Mazen & Mansour, Sarah & Morton, Rebecca B., 2021. "Polarizing information and support for reform," Journal of Economic Behavior & Organization, Elsevier, vol. 185(C), pages 883-901.
    3. Cantarella, Michele & Strozzi, Chiara, 2019. "Workers in the Crowd: The Labour Market Impact of the Online Platform Economy," IZA Discussion Papers 12327, Institute of Labor Economics (IZA).
    4. O. Ashton Morgan & John C. Whitehead, 2018. "Willingness to Pay for Soccer Player Development in the United States," Journal of Sports Economics, , vol. 19(2), pages 279-296, February.
    5. Azzam, Tarek & Harman, Elena, 2016. "Crowdsourcing for quantifying transcripts: An exploratory study," Evaluation and Program Planning, Elsevier, vol. 54(C), pages 63-73.
    6. Wladislaw Mill & Cornelius Schneider, 2023. "The Bright Side of Tax Evasion," CESifo Working Paper Series 10615, CESifo.
    7. Gandullia, Luca & Lezzi, Emanuela & Parciasepe, Paolo, 2020. "Replication with MTurk of the experimental design by Gangadharan, Grossman, Jones & Leister (2018): Charitable giving across donor types," Journal of Economic Psychology, Elsevier, vol. 78(C).
    8. Prissé, Benjamin & Jorrat, Diego, 2022. "Lab vs online experiments: No differences," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 100(C).
    9. Min Chung Han, 2021. "Thumbs down on “likes”? The impact of Facebook reactions on online consumers’ nonprofit engagement behavior," International Review on Public and Nonprofit Marketing, Springer;International Association of Public and Non-Profit Marketing, vol. 18(2), pages 255-272, June.
    10. Valerio Capraro & Hélène Barcelo, 2021. "Punishing defectors and rewarding cooperators: Do people discriminate between genders?," Journal of the Economic Science Association, Springer;Economic Science Association, vol. 7(1), pages 19-32, September.
    11. Kaitlynn Sandstrom‐Mistry & Frank Lupi & Hyunjung Kim & Joseph A. Herriges, 2023. "Comparing water quality valuation across probability and non‐probability samples," Applied Economic Perspectives and Policy, John Wiley & Sons, vol. 45(2), pages 744-761, June.
    12. Lefgren, Lars J. & Sims, David P. & Stoddard, Olga B., 2016. "Effort, luck, and voting for redistribution," Journal of Public Economics, Elsevier, vol. 143(C), pages 89-97.
    13. Bhatt, Vipul & Smith, Angela M., 2025. "Overconfidence and performance: Evidence from a simple real-effort task," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 114(C).
    14. Samir Mamadehussene & Francesco Sguera, 2023. "On the Reliability of the BDM Mechanism," Management Science, INFORMS, vol. 69(2), pages 1166-1179, February.
    15. Tim Straub & Henner Gimpel & Florian Teschner & Christof Weinhardt, 2015. "How (not) to Incent Crowd Workers," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 57(3), pages 167-179, June.
    16. Nicolas Jacquemet & Alexander G James & Stéphane Luchini & James J Murphy & Jason F Shogren, 2021. "Do truth-telling oaths improve honesty in crowd-working?," PLOS ONE, Public Library of Science, vol. 16(1), pages 1-18, January.
    17. Hiroki Ozono & Daisuke Nakama, 2022. "Effects of experimental situation on group cooperation and individual performance: Comparing laboratory and online experiments," PLOS ONE, Public Library of Science, vol. 17(4), pages 1-17, April.
    18. Brodeur, Abel & Cook, Nikolai & Heyes, Anthony, 2022. "We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell us about p-Hacking and Publication Bias in Online Experiments," I4R Discussion Paper Series 8, The Institute for Replication (I4R).
    19. Carpiano, Richard M. & Fitz, Nicholas S., 2017. "Public attitudes toward child undervaccination: A randomized experiment on evaluations, stigmatizing orientations, and support for policies," Social Science & Medicine, Elsevier, vol. 185(C), pages 127-136.
    20. Chandler, Dana & Kapelner, Adam, 2013. "Breaking monotony with meaning: Motivation in crowdsourcing markets," Journal of Economic Behavior & Organization, Elsevier, vol. 90(C), pages 123-133.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:wly:empleg:v:9:y:2012:i:4:p:765-794. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Wiley Content Delivery (email available below). General contact details of provider: https://doi.org/10.1111/(ISSN)1740-1461 .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.