Evaluation in the practice of development
Knowledge about development effectiveness is constrained by two factors. First, the project staff in governments and international agencies who decide how much to invest in research on specific interventions are often not well informed about the returns to rigorous evaluation and, even when they are, cannot be expected to take full account of the external benefits to others from new knowledge. This leads to under-investment in evaluative research. Second, while standard methods of impact evaluation are useful, they leave many questions about development effectiveness unanswered. The paper proposes ten steps for making evaluations more relevant to the needs of practitioners. It argues that more attention should be given to identifying policy-relevant questions (including the case for intervention), that a broader approach should be taken to the problems of internal validity, and that the problems of external validity (including scaling up) merit more attention.
Date of creation: 01 Mar 2008
Provider: The World Bank, 1818 H Street, N.W., Washington, DC 20433; phone: (202) 477-1234; web: http://www.worldbank.org/
Handle: RePEc:wbk:wbrwps:4547