Improving the Quality of Data and Impact-Evaluation Studies in Developing Countries
Abstract

While the science of program evaluation has advanced tremendously in the past couple of decades, measurement error remains a serious concern, and its implications are often poorly understood by both data collectors and data analysts. The primary aim here is to offer a "back-to-basics" approach to minimizing error in developing-country settings, particularly in relation to impact-evaluation studies. Overall, the report calls for a two-stage approach to dealing with mismeasurement. In the first stage, researchers should attempt to minimize mismeasurement during data collection, while also incorporating elements into the study that allow them to estimate its overall dimensions and its effects on analysis with more confidence. Econometric fixes for mismeasurement, whose purview is limited to a smaller subset of errors, then serve as a secondary line of defense. Such a complementary strategy can help ensure that decisions are made on the basis of the most accurate empirical evaluations.

The main body of the report includes four main sections. Section two discusses in detail many of the problems that can arise in the process of data collection and what is known about how they may contribute to measurement error. Section three provides a basic introduction to statistical, particularly econometric, methods that have been developed and used to help avoid the most problematic effects of mismeasurement. Section four offers an alternative approach to dealing with measurement error, one that focuses on reducing error at source. It offers pointers to current "best practice" on how to reduce measurement error during data collection, especially as those methods relate to evaluation research, and on how to incorporate elements into research design that allow researchers to estimate the dimensions of error. Section five focuses on the role of incentives as one possible approach to addressing one particular aspect of error. It uses data from the PROGRESA program in Mexico to evaluate indirectly the impact of incentives on certain aspects of data quality. The report concludes with a short summary and a list of ten steps that can be taken to reduce measurement error at source.
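The "econometric fixes" the abstract refers to typically target classical measurement error in a regressor, which biases OLS coefficients toward zero (attenuation bias). The following is a minimal illustrative sketch, not taken from the paper: it simulates a mismeasured regressor and applies a textbook reliability-ratio (errors-in-variables) correction, assuming the error variance is known, which in practice it rarely is.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = 2.0  # true slope (illustrative value, not from the paper)

x = rng.normal(0.0, 1.0, n)            # true regressor, variance 1
u = rng.normal(0.0, 1.0, n)            # classical measurement error, variance 1
x_obs = x + u                          # what the survey actually records
y = beta * x + rng.normal(0.0, 1.0, n)

# OLS slope using the mismeasured regressor
b_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

# Attenuation: plim b_naive = beta * var(x) / (var(x) + var(u)) = beta / 2 here.
# If the reliability ratio can be estimated (e.g. from a validation subsample),
# the attenuated estimate can be scaled back up:
reliability = np.var(x) / (np.var(x) + np.var(u))
b_corrected = b_naive / reliability
```

With these parameters `b_naive` lands near 1.0 (half the true slope) while `b_corrected` recovers roughly 2.0, illustrating why such corrections only work for the subset of errors whose structure is known, and why reducing error at source remains the first line of defense.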
Bibliographic Info

Paper provided by the Inter-American Development Bank, Office of Strategic Planning and Development Effectiveness (SPD), in its series SPD Working Papers with number 1002.
Date of creation: May 2010
Keywords: development effectiveness; impact evaluation; randomization; survey design; measurement error
JEL classification:
- C8 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs
- C9 - Mathematical and Quantitative Methods - - Design of Experiments
- I3 - Health, Education, and Welfare - - Welfare, Well-Being, and Poverty
Cited by:

- González-Flores, Mario & Heracleous, Maria & Winters, Paul, 2012. "Leaving the Safety Net: An Analysis of Dropouts in an Urban Conditional Cash Transfer Program," World Development, Elsevier, vol. 40(12), pages 2505-2521.