Improving Development Effectiveness through R&D: Dynamic Learning and Evaluation
Research and development (R&D) is standard practice in for-profit organizations. Despite its benefits, it is rarely practiced in nonprofit organizations, in part because it is difficult to identify the effects of programs designed to involve individuals over long periods of time. This paper presents a process by which organizations seeking to affect social outcomes can learn from their programs in both the short and the long run in order to develop the most cost-effective and impactful programs. We call it Dynamic Learning and Evaluation (DLE). DLE is a multi-arm experimental approach to program development that encompasses all stages of the design and implementation process. It combines a clear model of a program's causal chain with high-quality monitoring and impact evaluation. During initial program development, organizations randomly apply multiple implementation designs and test them against each other using qualitative and administrative data. Once the organization identifies the combination of designs that holds the most potential, it implements these designs in the field and estimates their impacts by collecting data from participants. The organization then uses the results to inform the next round of program implementation, repeating this process across multiple designs for the life of the program and the organization. At no point in the organization's lifespan does this learning process stop: programs are continually updated using systematic and objective methods to improve their design and impact. We present this process in detail.
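The iterative loop the abstract describes — randomize participants across competing design arms, estimate each arm's impact, retain the most promising designs, and feed the result into the next round — can be sketched as follows. This is a minimal illustrative simulation, not the paper's method: the design names, effect sizes, noise model, and "keep the top half" selection rule are all assumptions made for the example.

```python
import random
import statistics

def run_dle_round(arms, participants_per_arm, rng):
    """Randomly assign participants to each design arm and estimate
    each arm's average outcome from simulated participant data."""
    estimates = {}
    for arm, true_effect in arms.items():
        # Simulated outcomes: the arm's (assumed) true effect plus noise.
        outcomes = [true_effect + rng.gauss(0, 1.0)
                    for _ in range(participants_per_arm)]
        estimates[arm] = statistics.mean(outcomes)
    return estimates

def dynamic_learning(arms, rounds=3, participants_per_arm=200, seed=0):
    """DLE-style loop: evaluate competing program designs each round,
    keeping only the better-performing half for the next round."""
    rng = random.Random(seed)
    surviving = dict(arms)
    for _ in range(rounds):
        if len(surviving) == 1:
            break  # one design left; the loop would restart with new variants
        estimates = run_dle_round(surviving, participants_per_arm, rng)
        # Retain the top half of arms (at least one) for the next round.
        ranked = sorted(estimates, key=estimates.get, reverse=True)
        surviving = {a: surviving[a] for a in ranked[:max(1, len(ranked) // 2)]}
    return surviving

# Hypothetical design arms with assumed true effects (in outcome units).
designs = {"cash_grant": 0.40, "training": 0.25,
           "mentoring": 0.10, "status_quo": 0.00}
winners = dynamic_learning(designs)
print(winners)
```

In practice each round would use real monitoring and impact data rather than simulated outcomes, and the surviving designs would seed new variants rather than ending the process.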
Date of creation: 2013
Contact details of provider: Postal: Mohrenstraße 58, D-10117 Berlin
Web page: http://www.diw.de/en