Sensitive Questions in Online Surveys: Experimental Results for the Randomized Response Technique (RRT) and the Unmatched Count Technique (UCT)
Gaining valid answers to so-called sensitive questions is an age-old problem in survey research. Various techniques have been developed to guarantee anonymity and minimize the respondent's feelings of jeopardy. Two such techniques are the randomized response technique (RRT) and the unmatched count technique (UCT). In this study the authors evaluate the effectiveness of different implementations of the RRT (using a forced-response design) in a computer-assisted setting and also compare the use of the RRT to that of the UCT. The techniques are evaluated according to various quality criteria, such as the prevalence estimates they provide, the ease of their use, and respondent trust in the techniques. The results indicate that the RRTs are problematic with respect to several domains, such as the limited trust they inspire and nonresponse, and that the RRT estimates are unreliable due to a strong false "no" bias, especially for the more sensitive questions. The UCT, however, performed well compared to the RRTs on all the evaluated measures. The authors conclude that the UCT is a promising alternative to the RRT in self-administered surveys and that future research should be directed toward evaluating and improving the technique.
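To make the two techniques concrete, the following sketch shows the standard prevalence estimators behind each design: the forced-response RRT, in which a randomizer either lets the respondent answer truthfully or forces a "yes"/"no" answer, and the UCT (list experiment), in which prevalence is the difference in mean item counts between a treatment and a control group. This is an illustrative sketch, not the article's own analysis; all numeric design parameters in the example are hypothetical.

```python
def rrt_forced_response_estimate(yes_rate, p_truth, p_forced_yes):
    """Forced-response RRT estimator.

    With probability p_truth the respondent answers truthfully;
    otherwise the randomizer forces 'yes' (with probability
    p_forced_yes) or 'no'. The observed rate of 'yes' answers is
    yes_rate = p_truth * pi + p_forced_yes, so solve for pi.
    """
    return (yes_rate - p_forced_yes) / p_truth


def uct_estimate(mean_treatment, mean_control):
    """UCT (list experiment) estimator.

    The treatment group counts K+1 items (including the sensitive
    one); the control group counts only the K baseline items. The
    difference in mean counts estimates the sensitive-item prevalence.
    """
    return mean_treatment - mean_control


# Hypothetical example: 30% observed 'yes' answers under a randomizer
# that directs a truthful answer with probability 0.75 and forces
# 'yes' with probability 1/6 (e.g., a die roll).
pi_rrt = rrt_forced_response_estimate(0.30, 0.75, 1 / 6)

# Hypothetical example: mean counts of 2.45 (treatment) vs. 2.20
# (control) imply an estimated prevalence of 0.25.
pi_uct = uct_estimate(2.45, 2.20)
```

Both estimators are unbiased under their design assumptions, but as the abstract notes, RRT estimates can be distorted in practice when respondents give a false "no" instead of complying with the randomizer.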