Design of Web Questionnaires: A Test for Number of Items per Screen
Abstract
This paper presents results from an experimental manipulation of a one-item versus multiple-items per screen format in a Web survey. The purpose of the experiment was to find out whether a questionnaire's format influences how respondents provide answers in online questionnaires and whether this depends on personal characteristics. Four different formats were used, varying the number of items on a screen (1, 4, 10, and 40 items). To test how robust the results were, and to find out whether a specific format shows more deviation in answer scores, the experiment was repeated. We found that mean scores, variances, and correlations do not differ much across the formats. In addition, the formats show the same deviation of item scores between repeated experiments. With respect to non-response error, we found that the more items appear on a single screen, the higher the number of respondents with one or more missing values. Placing more items on a single screen a) shortens the duration of the interview, b) negatively influences the respondent's evaluation of the duration of the interview, c) negatively influences the respondent's evaluation of the layout, and d) increases the difficulty of completing the interview. We also found that scrolling negatively influences the evaluation of a questionnaire's layout. Furthermore, the results show that differences between formats are influenced by personal characteristics.
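The comparisons reported in the abstract (mean scores, variances, and correlations across the four screen formats) can be illustrated with a small statistical sketch. The snippet below is a minimal, hypothetical example and not the paper's actual analysis: it assumes simulated 5-point item scores for the four format groups and uses a one-way ANOVA for the means and Levene's test for the variances.

```python
# Illustrative sketch only: simulated data, not the paper's dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical 5-point item scores for respondents in each format
# (1, 4, 10, and 40 items per screen), 200 respondents per group.
formats = {
    1:  rng.integers(1, 6, size=200),
    4:  rng.integers(1, 6, size=200),
    10: rng.integers(1, 6, size=200),
    40: rng.integers(1, 6, size=200),
}

groups = list(formats.values())

# One-way ANOVA: do mean item scores differ across the four formats?
f_stat, p_mean = stats.f_oneway(*groups)

# Levene's test: do the variances differ across the formats?
w_stat, p_var = stats.levene(*groups)

print(f"ANOVA on means:      F = {f_stat:.3f}, p = {p_mean:.3f}")
print(f"Levene on variances: W = {w_stat:.3f}, p = {p_var:.3f}")
```

Non-significant results on both tests would be consistent with the paper's finding that means and variances are similar across formats.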
Bibliographic Info
Paper provided by Tilburg University, Center for Economic Research in its series Discussion Paper with number 2005-114.
Date of creation: 2005
Contact details of provider:
Web page: http://center.uvt.nl
Keywords: questionnaires; error analysis; web surveys; questionnaire design; measurement errors; non-response errors
JEL classification:
- C42 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods: Special Topics - - - Survey Methods
- C81 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - Methodology for Collecting, Estimating, and Organizing Microeconomic Data; Data Access
- C93 - Mathematical and Quantitative Methods - - Design of Experiments - - - Field Experiments