Printed from https://ideas.repec.org/p/bss/wpaper/18.html

More Is Not Always Better: An Experimental Individual-Level Validation of the Randomized Response Technique and the Crosswise Model

Authors

  • Marc Höglinger
  • Ben Jann

Abstract

Social desirability and the fear of sanctions can deter survey respondents from responding truthfully to sensitive questions. Self-reports on norm-breaking behavior such as shoplifting, non-voting, or tax evasion may therefore be subject to considerable misreporting. To mitigate such misreporting, various indirect techniques for asking sensitive questions, such as the randomized response technique (RRT), have been proposed in the literature. In our study, we evaluate the viability of several variants of the RRT, including the recently proposed crosswise-model RRT, by comparing respondents’ self-reports on cheating in dice games to actual cheating behavior, thereby distinguishing between false negatives (underreporting) and false positives (overreporting). The study was implemented as an online survey on Amazon Mechanical Turk (N = 6,505). Our results indicate that the forced-response RRT and the unrelated-question RRT, as implemented in our survey, fail to reduce the level of misreporting compared to conventional direct questioning. For the crosswise-model RRT, we do observe a reduction of false negatives (that is, an increase in the proportion of cheaters who admit having cheated). At the same time, however, there is an increase in false positives (that is, an increase in non-cheaters who falsely admit having cheated). Overall, our findings suggest that none of the implemented sensitive question techniques substantially outperforms direct questioning. Furthermore, our study demonstrates the importance of distinguishing between false negatives and false positives when evaluating the validity of sensitive question techniques.
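To make the designs discussed in the abstract concrete: in the forced-response RRT a randomizing device sometimes forces a "yes" or "no" answer, and in the crosswise model respondents report only whether their answers to the sensitive question and an innocuous question (with known prevalence) agree. The following Python sketch is purely illustrative and not from the paper; the parameter values (a 1/6 design probability, a 30% true prevalence) are arbitrary assumptions chosen for the example. It shows the standard prevalence estimators implied by both designs, assuming fully compliant respondents.

```python
import random

def crosswise_estimate(prop_same, p):
    """Prevalence estimate under the crosswise model.

    prop_same: observed share of respondents answering 'both yes or both no'.
    p: known probability that the innocuous statement is true (p != 0.5).
    From prop_same = pi*p + (1 - pi)*(1 - p), solve for pi.
    """
    return (prop_same + p - 1) / (2 * p - 1)

def forced_response_estimate(prop_yes, p_forced_yes, p_forced_no):
    """Prevalence estimate under the forced-response RRT.

    prop_yes: observed share of 'yes' answers.
    The device forces 'yes' with probability p_forced_yes and 'no' with
    probability p_forced_no; otherwise the respondent answers truthfully.
    """
    p_truth = 1 - p_forced_yes - p_forced_no
    return (prop_yes - p_forced_yes) / p_truth

# Simulate fully compliant crosswise respondents and check that the
# estimator recovers the true prevalence pi (illustrative values).
random.seed(1)
pi, p, n = 0.3, 1 / 6, 100_000
same = sum(
    (random.random() < pi) == (random.random() < p)  # do the two answers agree?
    for _ in range(n)
)
est = crosswise_estimate(same / n, p)
print(round(est, 3))  # should be close to pi = 0.3
```

Note that the estimators assume truthful compliance with the design; the paper's validation exercise tests exactly that assumption by comparing estimates against observed cheating behavior.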

Suggested Citation

  • Marc Höglinger & Ben Jann, 2016. "More Is Not Always Better: An Experimental Individual-Level Validation of the Randomized Response Technique and the Crosswise Model," University of Bern Social Sciences Working Papers 18, University of Bern, Department of Social Sciences.
  • Handle: RePEc:bss:wpaper:18
    Download full text from publisher

    File URL: https://boris.unibe.ch/81526/1/Hoeglinger-Jann-2016-MTurk.pdf
    File Function: working paper
    Download Restriction: no

    File URL: https://boris.unibe.ch/81526/2/Hoeglinger-Jann-2016-MTurk-Analysis.pdf
    File Function: documentation of analysis (log files)
    Download Restriction: no

    References listed on IDEAS

    1. John Horton & David Rand & Richard Zeckhauser, 2011. "The online laboratory: conducting experiments in a real labor market," Experimental Economics, Springer;Economic Science Association, vol. 14(3), pages 399-425, September.
    2. Brañas-Garza, Pablo & Capraro, Valerio & Rascón-Ramírez, Ericka, 2018. "Gender differences in altruism on Mechanical Turk: Expectations and actual behaviour," Economics Letters, Elsevier, vol. 170(C), pages 19-23.
    3. Kundt, Thorben, 2014. "Applying “Benford’s law” to the Crosswise Model: Findings from an online survey on tax evasion," Working Paper 148/2014, Helmut Schmidt University, Hamburg.
    4. Urs Fischbacher & Franziska Föllmi-Heusi, 2013. "Lies In Disguise—An Experimental Study On Cheating," Journal of the European Economic Association, European Economic Association, vol. 11(3), pages 525-547, June.
    5. Ulf Böckenholt & Sema Barlas & Peter G. M. van der Heijden, 2009. "Do randomized‐response designs eliminate response biases? An empirical study of non‐compliance behavior," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 24(3), pages 377-392, April.
    6. Jun-Wu Yu & Guo-Liang Tian & Man-Lai Tang, 2008. "Two new models for survey sampling with sensitive characteristic: design and analysis," Metrika: International Journal for Theoretical and Applied Statistics, Springer, vol. 67(3), pages 251-263, April.
    7. Marc Höglinger & Ben Jann & Andreas Diekmann, 2014. "Sensitive Questions in Online Surveys: An Experimental Evaluation of the Randomized Response Technique and the Crosswise Model," University of Bern Social Sciences Working Papers 9, University of Bern, Department of Social Sciences, revised 24 Jun 2014.
    8. Andreas Diekmann, 2012. "Making Use of “Benford’s Law” for the Randomized Response Technique," Sociological Methods & Research, vol. 41(2), pages 325-334, May.
    9. Elisabeth Coutts & Ben Jann, 2011. "Sensitive Questions in Online Surveys: Experimental Results for the Randomized Response Technique (RRT) and the Unmatched Count Technique (UCT)," Sociological Methods & Research, vol. 40(1), pages 169-193, February.
    10. Korndörfer, Martin & Krumpal, Ivar & Schmukle, Stefan C., 2014. "Measuring and explaining tax evasion: Improving self-reports using the crosswise model," Journal of Economic Psychology, Elsevier, vol. 45(C), pages 18-32.
    11. Kundt, Thorben C. & Misch, Florian & Nerré, Birger, 2013. "Re-assessing the merits of measuring tax evasions through surveys: Evidence from Serbian firms," ZEW Discussion Papers 13-047, ZEW - Leibniz Centre for European Economic Research.
    12. Kirchner, Antje, 2015. "Validating Sensitive Questions: A Comparison of Survey and Register Data," Journal of Official Statistics, Sciendo, vol. 31(1), pages 31-59, March.
    13. Berinsky, Adam J. & Huber, Gregory A. & Lenz, Gabriel S., 2012. "Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk," Political Analysis, Cambridge University Press, vol. 20(3), pages 351-368, July.

    Citations

    Citations are extracted by the CitEc Project.

    Cited by:

    1. Ulrich Thy Jensen, 2020. "Is self-reported social distancing susceptible to social desirability bias? Using the crosswise model to elicit sensitive behaviors," Journal of Behavioral Public Administration, Center for Experimental and Behavioral Public Administration, vol. 3(2).
    2. Adetola Adedamola Adediran & Femi Barnabas Adebola & Olusegun Sunday Ewemooje, 2020. "Unbiased estimator modeling in unrelated dichotomous randomized response," Statistics in Transition New Series, Polish Statistical Association, vol. 21(5), pages 119-132, December.
    3. Höglinger, Marc & Diekmann, Andreas, 2017. "Uncovering a Blind Spot in Sensitive Question Research: False Positives Undermine the Crosswise-Model RRT," Political Analysis, Cambridge University Press, vol. 25(1), pages 131-137, January.
    4. Daoust, Jean-François & Bélanger, Éric & Dassonneville, Ruth & Lachapelle, Erick & Nadeau, Richard & Becher, Michael & Brouard, Sylvain & Foucault, Martial & Hönnige, Christoph & Stegmueller, Daniel, 2020. "Face-Saving Strategies Increase Self-Reported Non-Compliance with COVID-19 Preventive Measures: Experimental Evidence from 12 Countries," SocArXiv tkrs7, Center for Open Science.
    5. Ivar Krumpal & Thomas Voss, 2020. "Sensitive Questions and Trust: Explaining Respondents’ Behavior in Randomized Response Surveys," SAGE Open, vol. 10(3), July.
    6. Chuang, Erica & Dupas, Pascaline & Huillery, Elise & Seban, Juliette, 2021. "Sex, lies, and measurement: Consistency tests for indirect response survey methods," Journal of Development Economics, Elsevier, vol. 148(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Andreas Quatember, 2019. "A discussion of the two different aspects of privacy protection in indirect questioning designs," Quality & Quantity: International Journal of Methodology, Springer, vol. 53(1), pages 269-282, January.
    2. Korndörfer, Martin & Krumpal, Ivar & Schmukle, Stefan C., 2014. "Measuring and explaining tax evasion: Improving self-reports using the crosswise model," Journal of Economic Psychology, Elsevier, vol. 45(C), pages 18-32.
    3. Dato, Simon & Feess, Eberhard & Nieken, Petra, 2019. "Lying and reciprocity," Games and Economic Behavior, Elsevier, vol. 118(C), pages 193-218.
    4. Valerio Capraro, 2018. "Gender differences in lying in sender-receiver games: A meta-analysis," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 13(4), pages 345-355, July.
    5. Nicolas Jacquemet & Alexander James & Stéphane Luchini & James Murphy & Jason F. Shogren, 2019. "Lying and Shirking Under Oath," Working Papers 19-19, Chapman University, Economic Science Institute.
      • Nicolas Jacquemet & Alexander James & Stéphane Luchini & James J. Murphy & Jason F. Shogren, 2019. "Lying and Shirking Under Oath," Working Papers 2019-02, University of Alaska Anchorage, Department of Economics.
    6. Nicolas Jacquemet & Alexander G James & Stéphane Luchini & James J Murphy & Jason F Shogren, 2021. "Do truth-telling oaths improve honesty in crowd-working?," PLOS ONE, Public Library of Science, vol. 16(1), pages 1-18, January.
    7. Marc Höglinger & Ben Jann, 2016. "MTurk Survey on "Mood and Personality". Documentation," University of Bern Social Sciences Working Papers 17, University of Bern, Department of Social Sciences.
    8. Kundt, Thorben, 2014. "Applying “Benford’s law” to the Crosswise Model: Findings from an online survey on tax evasion," Working Paper 148/2014, Helmut Schmidt University, Hamburg.
    9. Kirchner, Antje, 2015. "Validating Sensitive Questions: A Comparison of Survey and Register Data," Journal of Official Statistics, Sciendo, vol. 31(1), pages 31-59, March.
    10. Rebecca R Carter & Analisa DiFeo & Kath Bogie & Guo-Qiang Zhang & Jiayang Sun, 2014. "Crowdsourcing Awareness: Exploration of the Ovarian Cancer Knowledge Gap through Amazon Mechanical Turk," PLOS ONE, Public Library of Science, vol. 9(1), pages 1-10, January.
    11. Heinicke, Franziska & Rosenkranz, Stephanie & Weitzel, Utz, 2019. "The effect of pledges on the distribution of lying behavior: An online experiment," Journal of Economic Psychology, Elsevier, vol. 73(C), pages 136-151.
    12. Brañas-Garza, Pablo & Capraro, Valerio & Rascón-Ramírez, Ericka, 2018. "Gender differences in altruism on Mechanical Turk: Expectations and actual behaviour," Economics Letters, Elsevier, vol. 170(C), pages 19-23.
    13. Angela C M de Oliveira & John M Spraggon & Matthew J Denny, 2016. "Instrumenting Beliefs in Threshold Public Goods," PLOS ONE, Public Library of Science, vol. 11(2), pages 1-15, February.
    14. Masha Shunko & Julie Niederhoff & Yaroslav Rosokha, 2018. "Humans Are Not Machines: The Behavioral Impact of Queueing Design on Service Time," Management Science, INFORMS, vol. 64(1), pages 453-473, January.
    15. Atalay, Kadir & Bakhtiar, Fayzan & Cheung, Stephen & Slonim, Robert, 2014. "Savings and prize-linked savings accounts," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 86-106.
    16. Haas, Nicholas & Hassan, Mazen & Mansour, Sarah & Morton, Rebecca B., 2021. "Polarizing information and support for reform," Journal of Economic Behavior & Organization, Elsevier, vol. 185(C), pages 883-901.
    17. Cantarella, Michele & Strozzi, Chiara, 2019. "Workers in the Crowd: The Labour Market Impact of the Online Platform Economy," IZA Discussion Papers 12327, Institute of Labor Economics (IZA).
    18. Alves, Guillermo & Blanchard, Pablo & Burdin, Gabriel & Chávez, Mariana & Dean, Andres, 2019. "The Economic Preferences of Cooperative Managers," IZA Discussion Papers 12330, Institute of Labor Economics (IZA).
    19. Azzam, Tarek & Harman, Elena, 2016. "Crowdsourcing for quantifying transcripts: An exploratory study," Evaluation and Program Planning, Elsevier, vol. 54(C), pages 63-73.
    20. Chirvi, Malte & Schneider, Cornelius, 2020. "Preferences for wealth taxation: Design, framing and the role of partisanship," arqus Discussion Papers in Quantitative Tax Research 260, arqus - Arbeitskreis Quantitative Steuerlehre.

    More about this item

    Keywords

    Sensitive Questions; Online Survey; Amazon Mechanical Turk; Randomized Response Technique; Crosswise Model; Dice Game; Validation;

    JEL classification:

    • C81 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - Methodology for Collecting, Estimating, and Organizing Microeconomic Data; Data Access
    • C83 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - Survey Methods; Sampling Methods


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:bss:wpaper:18. See general information about how to correct material in RePEc.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Ben Jann (email available below). General contact details of provider: http://www.sowi.unibe.ch/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis. RePEc uses bibliographic data supplied by the respective publishers.