Verification of Citations: Fawlty Towers of Knowledge?
The prevalence of faulty citations impedes the growth of scientific knowledge. Faulty citations include omissions of relevant papers, incorrect references, and quotation errors that misreport findings. We discuss key studies in these areas. We then examine citations to "Estimating Nonresponse Bias in Mail Surveys," one of the most frequently cited papers from the Journal of Marketing Research, as an exploratory study to illustrate these issues. This paper is especially useful in testing for quotation errors because it provides specific operational recommendations on adjusting for nonresponse bias; it therefore allows us to determine whether the citing papers properly used its findings.

By any number of measures, those doing survey research fail to cite this paper and, presumably, make inadequate adjustments for nonresponse bias. Furthermore, even when the paper was cited, 49 of the 50 studies that we examined reported its findings improperly. The inappropriate use of statistical-significance testing led researchers to conclude that nonresponse bias was not present in 76 percent of the studies in our sample. Only one of the studies in the sample made any adjustment for it. Judging from the original paper, we estimate that the study researchers should have predicted nonresponse bias and adjusted for 148 variables. In this case, the faulty citations seem to have arisen either because the authors did not read the original paper or because they did not fully understand its implications.

To address the problem of omissions, we recommend that journals include a section on their websites to list all relevant papers that have been overlooked and show how each omitted paper relates to the published paper. In general, authors should routinely verify the accuracy of their sources by reading the cited papers. For substantive findings, they should attempt to contact the authors for confirmation or clarification of the results and methods. This would also provide them with the opportunity to enquire about other relevant references. Journal editors should require that authors sign statements that they have read the cited papers and, when appropriate, have attempted to verify the citations.
Date of creation: Jul 2007
References:
- Armstrong, J. Scott, 2007. "Significance Tests Harm Progress in Forecasting," International Journal of Forecasting, Elsevier, vol. 23(2), pages 321-327.
- Armstrong, J. Scott & Overton, Terry, 2005. "Estimating Nonresponse Bias in Mail Surveys," General Economics and Teaching 0502044, EconWPA.