The Desperate Need for Replications
Abstract: An overemphasis on creativity for evaluating research has led to a serious devaluation of replication studies. However, we need a total sample size of N = 153,669 to estimate a causal effect to two digits, which is quite rare for a single study. The only way to get accurate estimation is to average across replications. If the average sample size were as high as N = 200, we would need over 700 replication studies. Scientific replications are more problematic than pure statistical replications, and so we need even more replications to achieve reasonable accuracy. Copyright 2001 by the University of Chicago.
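The abstract's figures can be checked with a back-of-envelope calculation. The sketch below is an assumption about how such a number arises, not the paper's own derivation: a 95% confidence interval with half-width 0.005 (two-decimal accuracy) on a correlation-sized effect requires roughly N = (1.96 / 0.005)^2 observations, which is close to the quoted 153,669; dividing that total by an average study size of N = 200 gives the "over 700 replications" claim.

```python
import math

# Hypothetical reconstruction (assumption: two-digit accuracy means a
# 95% CI half-width of 0.005 on a correlation-scale effect estimate,
# whose standard error is roughly 1/sqrt(n)).
z = 1.96                              # 95% two-sided critical value
half_width = 0.005                    # +/- 0.005 gives two correct decimals
n_single = (z / half_width) ** 2      # total sample size for one study

# Pooling across replications: studies needed if each has N = 200,
# using the paper's quoted total of 153,669.
avg_n = 200
n_replications = math.ceil(153_669 / avg_n)

print(round(n_single), n_replications)  # → 153664 769
```

The computed 153,664 is within rounding of the paper's 153,669, and 769 studies of 200 subjects each matches the "over 700 replication studies" statement.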
Download Info: To our knowledge, this item is not available for download. To find whether it is available, there are three options:
1. Check below under "Related research" whether another version of this item is available online.
2. Check on the provider's web page whether it is in fact available.
3. Perform a search for a similarly titled item that would be available.
Bibliographic Info: Article provided by University of Chicago Press in its journal Journal of Consumer Research.
Volume (Year): 28 (2001)
Issue (Month): 1 (June)
Contact details of provider:
Web page: http://www.journals.uchicago.edu/JCR/
- Hamermesh, Daniel S., 2007. "Replication in Economics," IZA Discussion Papers 2760, Institute for the Study of Labor (IZA).
- Watson, Verity & Ryan, Mandy, 2007. "Exploring preference anomalies in double bounded contingent valuation," Journal of Health Economics, Elsevier, vol. 26(3), pages 463-482, May.
- Omar Al-Ubaydli & John A. List, 2013. "On the Generalizability of Experimental Results in Economics: With a Response to Commentors," CESifo Working Paper Series 4543, CESifo Group Munich.
- Zacharias Maniadis & Fabio Tufano & John A. List, 2014. "One Swallow Doesn't Make a Summer: New Evidence on Anchoring Effects," American Economic Review, American Economic Association, vol. 104(1), pages 277-90, January.
- Zacharias Maniadis & Fabio Tufano & John List, 2013. "One Swallow Does not Make a Summer: New Evidence on Anchoring Effects," Levine's Working Paper Archive 786969000000000824, David K. Levine.
- Hubbard, Raymond & Lindsay, R. Murray, 2013. "From significant difference to significant sameness: Proposing a paradigm shift in business research," Journal of Business Research, Elsevier, vol. 66(9), pages 1377-1388.
- Auh, Seigyoung & Johnson, Michael D., 2005. "Compatibility effects in evaluations of satisfaction and loyalty," Journal of Economic Psychology, Elsevier, vol. 26(1), pages 35-57, February.
- Evanschitzky, Heiner & Armstrong, J. Scott, 2013. "Research with In-built replications: Comment and further suggestions for replication research," Journal of Business Research, Elsevier, vol. 66(9), pages 1406-1408.
- Evanschitzky, Heiner & Armstrong, J. Scott, 2010. "Replications of forecasting research," International Journal of Forecasting, Elsevier, vol. 26(1), pages 4-8, January.
- Zacharias Maniadis & Fabio Tufano & John A. List, 2013. "One Swallow Doesn't Make a Summer: New Evidence on Anchoring Effects," Discussion Papers 2013-07, The Centre for Decision Research and Experimental Economics, School of Economics, University of Nottingham.
- Walsh, Gianfranco & Beatty, Sharon E. & Shiu, Edward M.K., 2009. "The customer-based corporate reputation scale: Replication and short form," Journal of Business Research, Elsevier, vol. 62(10), pages 924-930, October.