Printed from https://ideas.repec.org/p/ehl/lserod/48069.html

The item count method for sensitive survey questions: modelling criminal behaviour

Author

Listed:
  • Kuha, Jouni
  • Jackson, Jonathan

Abstract

The item count method is a way of asking sensitive survey questions which protects the anonymity of the respondents by randomization before the interview. It can be used to estimate the probability of sensitive behaviour and to model how it depends on explanatory variables. We analyse item count survey data on the illegal behaviour of buying stolen goods. The analysis of an item count question is best formulated as an instance of modelling incomplete categorical data. We propose an efficient implementation of the estimation which also provides explicit variance estimates for the parameters. We then suggest specifications for the model for the control items, which is an auxiliary but unavoidable part of the analysis of item count data. These considerations and the results of our analysis of criminal behaviour highlight the fact that careful design of the questions is crucial for the success of the item count method.
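To illustrate the basic idea behind the item count method (the paper itself develops a model-based maximum likelihood analysis, not this simple estimator): the control group reports how many of J innocuous items apply to them, while the treatment group answers the same list plus the sensitive item. Since only a total count is reported, no individual answer to the sensitive item is revealed, and the prevalence of the sensitive behaviour can be estimated by the difference in mean counts between groups. The sketch below uses simulated data; all parameter values (J = 4, prevalence 0.15, item probability 0.5) are illustrative assumptions, not figures from the paper.

```python
import random

random.seed(1)
J = 4                   # number of innocuous control items (assumed)
true_prevalence = 0.15  # assumed true rate of the sensitive behaviour
n = 5000                # respondents per group

# Simulate each control item as an independent coin flip with probability 0.5.
control = [sum(random.random() < 0.5 for _ in range(J)) for _ in range(n)]
# Treatment group: same control items, plus 1 if the sensitive item applies.
treated = [sum(random.random() < 0.5 for _ in range(J))
           + (random.random() < true_prevalence) for _ in range(n)]

# Difference-in-means estimator of the sensitive-item prevalence.
est = sum(treated) / n - sum(control) / n

def var(xs):
    # Sample variance with the n-1 denominator.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Standard error of a difference between two independent sample means.
se = (var(treated) / n + var(control) / n) ** 0.5
print(f"estimated prevalence: {est:.3f} (SE {se:.3f})")
```

Note the cost of anonymity visible in the standard error: the variance of the control-item count inflates the uncertainty, which is one reason the paper stresses careful specification of the model for the control items.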

Suggested Citation

  • Kuha, Jouni & Jackson, Jonathan, 2014. "The item count method for sensitive survey questions: modelling criminal behaviour," LSE Research Online Documents on Economics 48069, London School of Economics and Political Science, LSE Library.
  • Handle: RePEc:ehl:lserod:48069

    Download full text from publisher

    File URL: http://eprints.lse.ac.uk/48069/
    File Function: Open access version.
    Download Restriction: no

    References listed on IDEAS

    1. Corstange, Daniel, 2009. "Sensitive Questions, Truthful Answers? Modeling the List Experiment with LISTIT," Political Analysis, Cambridge University Press, vol. 17(1), pages 45-63, January.
    2. D. Oakes, 1999. "Direct calculation of the information matrix via the EM," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 61(2), pages 479-482, April.
    3. Elisabeth Coutts & Ben Jann, 2011. "Sensitive Questions in Online Surveys: Experimental Results for the Randomized Response Technique (RRT) and the Unmatched Count Technique (UCT)," Sociological Methods & Research, vol. 40(1), pages 169-193, February.
    4. Blair, Graeme & Imai, Kosuke, 2012. "Statistical Analysis of List Experiments," Political Analysis, Cambridge University Press, vol. 20(1), pages 47-77, January.
    5. Imai, Kosuke, 2011. "Multivariate Regression Analysis for the Item Count Technique," Journal of the American Statistical Association, American Statistical Association, vol. 106(494), pages 407-416.

    Citations



    Cited by:

    1. Andreas Lagerås & Mathias Lindholm, 2020. "How to ask sensitive multiple‐choice questions," Scandinavian Journal of Statistics, Danish Society for Theoretical Statistics;Finnish Statistical Society;Norwegian Statistical Association;Swedish Statistical Association, vol. 47(2), pages 397-424, June.
    2. Jiayuan Li & Wim Van den Noortgate, 2022. "A Meta-analysis of the Relative Effectiveness of the Item Count Technique Compared to Direct Questioning," Sociological Methods & Research, vol. 51(2), pages 760-799, May.
    3. David Boto‐García & Federico Perali, 2024. "The association between marital locus of control and break‐up intentions," American Journal of Economics and Sociology, Wiley Blackwell, vol. 83(1), pages 35-57, January.
    4. Thorben C. Kundt & Florian Misch & Birger Nerré, 2017. "Re-assessing the merits of measuring tax evasion through business surveys: an application of the crosswise model," International Tax and Public Finance, Springer;International Institute of Public Finance, vol. 24(1), pages 112-133, February.
    5. Jackson, Jonathan & Bradford, Ben & Hough, Mike & Carrillo, Stephany, 2014. "Extending procedural justice theory: a Fiducia report on the design of new survey indicators," LSE Research Online Documents on Economics 62237, London School of Economics and Political Science, LSE Library.
    6. Yonghong An & Pengfei Liu, 2020. "Eliciting Information from Sensitive Survey Questions," Papers 2009.01430, arXiv.org.
    7. Groenitz, Heiko, 2016. "A covariate nonrandomized response model for multicategorical sensitive variables," Computational Statistics & Data Analysis, Elsevier, vol. 103(C), pages 124-138.
    8. Yin Liu & Guo-Liang Tian & Qin Wu & Man-Lai Tang, 2019. "Poisson–Poisson item count techniques for surveys with sensitive discrete quantitative data," Statistical Papers, Springer, vol. 60(5), pages 1763-1791, October.
    9. Alwyn Lim & Shawn Pope, 2022. "What drives companies to do good? A “universal” ordering of corporate social responsibility motivations," Corporate Social Responsibility and Environmental Management, John Wiley & Sons, vol. 29(1), pages 233-255, January.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. De Cao, Elisabetta & Lutz, Clemens, 2014. "Sensitive survey questions," Research Report 14017-EEF, University of Groningen, Research Institute SOM (Systems, Organisations and Management).
    2. S. Rinken & S. Pasadas-del-Amo & M. Rueda & B. Cobo, 2021. "No magic bullet: estimating anti-immigrant sentiment and social desirability bias with the item-count technique," Quality & Quantity: International Journal of Methodology, Springer, vol. 55(6), pages 2139-2159, December.
    3. Leopoldo Fergusson & Carlos Molina & Juan Felipe Riaño, 2018. "I Sell My Vote, and So What? Incidence, Social Bias, and Correlates of Clientelism in Colombia," Economía Journal, The Latin American and Caribbean Economic Association - LACEA, vol. 0(Fall 2018), pages 181-218, November.
    4. Carole Treibich & Aurélia Lépine, 2019. "Estimating misreporting in condom use and its determinants among sex workers: Evidence from the list randomisation method," Health Economics, John Wiley & Sons, Ltd., vol. 28(1), pages 144-160, January.
    6. Richard Traunmüller & Sara Kijewski & Markus Freitag, 2019. "The Silent Victims of Sexual Violence during War: Evidence from a List Experiment in Sri Lanka," Journal of Conflict Resolution, Peace Science Society (International), vol. 63(9), pages 2015-2042, October.
    7. Tadesse, Getaw & Abate, Gashaw T. & Zewdie, Tadiwos, 2020. "Biases in self-reported food insecurity measurement: A list experiment approach," Food Policy, Elsevier, vol. 92(C).
    8. Elisabetta de Cao & Clemens Lutz, 2015. "Measuring attitudes regarding female genital mutilation through a list experiment," CSAE Working Paper Series 2015-20, Centre for the Study of African Economies, University of Oxford.
    9. Gueorguiev, Dimitar & Malesky, Edmund, 2012. "Foreign investment and bribery: A firm-level analysis of corruption in Vietnam," Journal of Asian Economics, Elsevier, vol. 23(2), pages 111-129.
    10. Ezequiel Gonzalez-Ocantos & Chad Kiewiet de Jonge & Carlos Meléndez & David Nickerson & Javier Osorio, 2020. "Carrots and sticks: Experimental evidence of vote-buying and voter intimidation in Guatemala," Journal of Peace Research, Peace Research Institute Oslo, vol. 57(1), pages 46-61, January.
    11. Lai, Yufeng & Minegishi, Kota & Boaitey, Albert K., 2020. "Social Desirability Bias in Farm Animal Welfare Preference Research," 2020 Annual Meeting, July 26-28, Kansas City, Missouri 304375, Agricultural and Applied Economics Association.
    12. Heiko Groenitz, 2018. "Analyzing efficiency for the multi-category parallel method," METRON, Springer;Sapienza Università di Roma, vol. 76(2), pages 231-250, August.
    13. Chuang, Erica & Dupas, Pascaline & Huillery, Elise & Seban, Juliette, 2021. "Sex, lies, and measurement: Consistency tests for indirect response survey methods," Journal of Development Economics, Elsevier, vol. 148(C).
    14. Katherine B. Coffman & Lucas C. Coffman & Keith M. Marzilli Ericson, 2017. "The Size of the LGBT Population and the Magnitude of Antigay Sentiment Are Substantially Underestimated," Management Science, INFORMS, vol. 63(10), pages 3168-3186, October.
    15. Lépine, Aurélia & Treibich, Carole & D’Exelle, Ben, 2020. "Nothing but the truth: Consistency and efficiency of the list experiment method for the measurement of sensitive health behaviours," Social Science & Medicine, Elsevier, vol. 266(C).
    16. Leonardo Bursztyn & Georgy Egorov & Ruben Enikolopov & Maria Petrova, 2019. "Social Media and Xenophobia: Evidence from Russia," NBER Working Papers 26567, National Bureau of Economic Research, Inc.
    17. Marine JOUVIN, 2021. "Addressing social desirability bias in child labor measurement : an application to cocoa farms in Côte d’Ivoire," Bordeaux Economics Working Papers 2021-08, Bordeaux School of Economics (BSE).
    18. Leopoldo Fergusson & Carlos Molina & Juan Felipe Riaño, 2019. "Consumers as VAT “Evaders”: Incidence, Social Bias, and Correlates in Colombia," Economía Journal, The Latin American and Caribbean Economic Association - LACEA, vol. 0(Spring 20), pages 21-67, April.
    19. Detkova, Polina & Tkachenko, Andrey & Yakovlev, Andrei, 2021. "Gender heterogeneity of bureaucrats in attitude to corruption: Evidence from list experiment," Journal of Economic Behavior & Organization, Elsevier, vol. 189(C), pages 217-233.
    20. Lai, Yufeng & Boaitey, Albert & Minegishi, Kota, 2022. "Behind the veil: Social desirability bias and animal welfare ballot initiatives," Food Policy, Elsevier, vol. 106(C).
    21. M. Niaz Asadullah & Elisabetta De Cao & Fathema Zhura Khatoon & Zahra Siddique, 2021. "Measuring gender attitudes using list experiments," Journal of Population Economics, Springer;European Society for Population Economics, vol. 34(2), pages 367-400, April.

    More about this item

    Keywords

    categorical data analysis; EM algorithm; list experiment; missing information; Newton-Raphson algorithm; randomized response;

    JEL classification:

    • C1 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General

    NEP fields

    This paper has been announced in the following NEP Reports:

    Statistics


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:ehl:lserod:48069. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: LSERO Manager (email available below). General contact details of provider: https://edirc.repec.org/data/lsepsuk.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.