Considering the Major Arguments Against Random Assignment: An Analysis of the Intellectual Culture Surrounding Evaluation in American Schools of Education
Abstract
This paper notes the very low incidence of randomized experiments in research on school reform and claims that the few that have been done are the product of researchers in fields other than education. It seeks to probe why scholars teaching in schools of education do not do experiments. Eight arguments against random assignment that such researchers invoke are then examined. All are rejected, though some are acknowledged to have considerable merit. Taken together, they do not constitute a case strong enough to reject random assignment. But they do suggest two things of importance: randomized experiments do not provide a "gold standard" for causal inference; they are merely more efficient and credible than their non-experimental alternatives. And experiments in education should empirically examine not just the treatment-outcome relationship but also the theory of the reform, the quality of treatment implementation, and the processes presumed to mediate effects, whether done quantitatively or qualitatively. Even so, it will not be easy to get researchers from schools of education to do more experiments so long as the metaphor guiding research and development in that field highlights schools as complex, social organizations. This leads to schools being studied with the tools that sociologists of organizations and political scientists traditionally use to examine organizational development: intensive case studies. It also leads to recommendations for change being made in the way that is traditionally associated with management consultants (i.e., intensive case knowledge is linked to whichever theories of organizational development seem appropriate to the specifics of the school on hand). This model is quite different from the more explicitly decision-theoretic model of research and development that operates in medicine, public health, or agriculture, and that aspires to more general knowledge about the causal effects of interventions.
Bibliographic Info
Paper provided by the Institute for Policy Research at Northwestern University in its series IPR Working Papers, number 99-2.