Considering the Major Arguments Against Random Assignment: An Analysis of the Intellectual Culture Surrounding Evaluation in American Schools of Education
This paper notes the very low incidence of randomized experiments in research on school reform and claims that the few that have been done are the product of researchers in fields other than education. It seeks to probe why scholars teaching in schools of education do not do experiments. Eight arguments against random assignment that such researchers invoke are then examined. All are rejected, though some are acknowledged to have considerable merit. Taken together, they do not constitute a case strong enough to reject random assignment. But they do suggest two things of importance: randomized experiments do not provide a "gold standard" for causal inference; they are merely more efficient and credible than their non-experimental alternatives. And experiments in education should empirically examine not just the treatment-outcome relationship but also the theory of the reform, the quality of treatment implementation, and the processes presumed to mediate effects, whether done quantitatively or qualitatively. Even so, it will not be easy to get researchers from schools of education to do more experiments so long as the metaphor guiding research and development in that field highlights schools as complex social organizations. This leads to schools being studied with the tools that sociologists of organizations and political scientists traditionally use to examine organizational development: intensive case studies. It also leads to recommendations for change being made in the way that is traditionally associated with management consultants (i.e., intensive case knowledge is linked to whichever theories of organizational development seem appropriate to the specifics of the school at hand). This model is quite different from the more explicitly decision-theoretic model of research and development that operates in medicine, public health, or agriculture, and that aspires to more general knowledge about the causal effects of interventions.
Provider postal address: 2040 Sheridan Road, Evanston, IL 60208-4100
Web page: http://www.nwu.edu/IPR/publications/wpindex1.html
RePEc handle: RePEc:wop:nwuipr:99-2