Using Performance Indicators to Improve the Effectiveness of Welfare-to-Work Programs
This paper argues that it is feasible to develop good indicators of the performance of a particular welfare-to-work program, office, or contractor. Performance indicators can motivate local offices, contractors, and staff to be more effective in achieving the program's goals, and can provide information on which program strategies lead to the greatest long-run success. To be most useful, performance indicators must be simple and timely, and must control for factors other than the program's effectiveness that influence whether welfare recipients "succeed." Program managers and policy makers would like to know the "value added" of a welfare-to-work program: the difference the program makes in welfare recipients' lives, compared with what would have happened in its absence. The program-induced differences of most interest depend on our social goals, but might include earnings gains, reduced welfare dependence, improved self-esteem, and better job skills. Value added is hard to measure because it is costly and time-consuming to determine what would have happened if the program did not exist. In particular, the change in earnings or welfare benefits from before to after the program is a poor measure of program effects, because many welfare recipients suffer from temporary problems: on average, a typical group of welfare recipients will over time reduce their welfare dependence and increase their earnings even without any special assistance. For example, in the Riverside County, California welfare-to-work experiment, the welfare recipients randomly assigned to a "control group" that received no special services more than tripled their earnings over the three years after the experiment's start, and fewer than half were still receiving welfare after three years (Riccio et al., 1994, pp. 323-324).
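The Riverside pattern can be made concrete with a small arithmetic sketch: when a randomly assigned control group also improves on its own, a before-to-after comparison overstates the program's effect, while the treatment-control difference in changes isolates value added. All dollar figures below are invented for illustration, not taken from the experiment.

```python
# Hypothetical illustration of why the before-to-after earnings change is a
# poor measure of program effects when recipients would have improved anyway.

def before_after_change(before, after):
    """Naive measure: change in average earnings from before to after."""
    return after - before

def value_added(treat_before, treat_after, ctrl_before, ctrl_after):
    """Difference-in-differences: the program's effect net of the
    improvement the control group achieved without special services."""
    return (treat_after - treat_before) - (ctrl_after - ctrl_before)

# Invented average annual earnings for a treatment group and a randomly
# assigned control group.
naive = before_after_change(2000.0, 6500.0)
net = value_added(2000.0, 6500.0, 2000.0, 6000.0)
print(naive, net)  # the naive measure credits the program with far more
```

With these invented figures, the naive measure attributes a $4,500 gain to the program, even though the control group gained $4,000 without it; the true value added is only $500.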
Social experiments in which individuals are randomly assigned to receive or be denied services can measure a program's value added. But using social experiments to monitor the performance of local welfare offices would require perpetually running experiments in every local office. Such widespread experimentation is unlikely; among other concerns, is it ethical to deny services to a "control group" of welfare recipients on an ongoing basis in every local welfare office merely to aid program management? Performance indicators for a welfare-to-work program are therefore imperfect proxies for the program's value added. They take data on program outcomes and adjust the data so that they more closely track value added, for example by controlling for the characteristics of the welfare recipients served or the health of the local economy. The advantage of performance indicators is that they do not require experimental methods; the disadvantage is that they may sometimes give a misleading impression of the value added of a particular program, welfare office, or contractor. This disadvantage can be reduced by developing better performance indicators and by using them carefully. Robert Behn, in his book on the Massachusetts ET program, suggests that performance indicators for welfare-to-work programs may serve three purposes: (1) justifying the overall program; (2) identifying how to improve the program; and (3) motivating better performance by managers and line workers associated with the program (Behn, 1991). Each purpose requires a different type of performance indicator. To justify a welfare-to-work program, a perfectly rational policy maker would want to know the program's value added, preferably measured with experimental data.
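One simple way to adjust outcome data for factors outside a program's control, as described above, is regression adjustment: fit outcomes to local conditions and score each office by its residual. The sketch below uses invented office names and figures, and a single covariate (the local unemployment rate); a real indicator would also control for recipient characteristics.

```python
# A minimal sketch of a regression-adjusted performance indicator, using
# invented office-level data. The raw indicator is each office's average
# earnings; the adjusted indicator removes the part predicted by local
# unemployment, so offices in weak labor markets are not penalized.

def ols_slope_intercept(xs, ys):
    """Ordinary least squares fit y = a + b*x, computed directly."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def adjusted_indicators(offices):
    """Return each office's residual: raw outcome minus the outcome
    predicted from local conditions alone."""
    xs = [o["unemployment_rate"] for o in offices]
    ys = [o["avg_earnings"] for o in offices]
    a, b = ols_slope_intercept(xs, ys)
    return {o["name"]: o["avg_earnings"] - (a + b * o["unemployment_rate"])
            for o in offices}

# Invented data: Office B has the lowest raw earnings but faces the
# weakest local economy.
offices = [
    {"name": "A", "unemployment_rate": 4.0, "avg_earnings": 7000.0},
    {"name": "B", "unemployment_rate": 12.0, "avg_earnings": 6000.0},
    {"name": "C", "unemployment_rate": 8.0, "avg_earnings": 6200.0},
]
print(adjusted_indicators(offices))
```

With these invented figures, Office B ranks lowest on raw average earnings, but once local unemployment is controlled for it ties Office A, and Office C falls to last; the residuals sum to zero by construction.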
In the real world, a program might be politically justified by any data showing that outcomes improved for welfare recipients, as one would expect even for an unsuccessful program; good anecdotes might be as politically relevant as quantitative data. To identify how to improve a welfare-to-work program, performance indicators must be positively correlated with program value added, but value added need not be measured precisely: we need to identify which local offices, contractors, and staff members are more successful, not exactly how successful they are. Performance indicators for program improvement must also be linked to information on the strategies of particular offices, organizations, and staff members, so that we know why a particular program component succeeds. To motivate better performance by local offices and staff, we need performance indicators that are timely and understandable. These indicators must be easier to raise by increasing the program's value added than by taking other actions that reduce value added. For example, indicators for motivational purposes should be easier to improve by raising welfare recipients' earnings than by selecting for the program those recipients who will look good on the indicators ("creaming"); the welfare recipients with the greatest earnings prospects are not always those who will gain the most value added from a welfare-to-work program. Finally, for any performance indicator to help motivate performance, program administrators must have the will to use it in allocating some resources. Performance indicators may also be distinguished by which part of the organization's performance is being measured, and by whom.
Performance indicators may be used by federal officials to measure state performance, by state officials to monitor local welfare offices, by local offices to monitor contractors, and by local offices or contractors to monitor individual staff. The closer one gets to the individual staff level of the welfare system, the easier it becomes to substitute personal interaction and judgment for quantitative performance indicators, and the harder it becomes to interpret complex indicators consistently. Performance indicators should therefore be simpler, even if rougher, close to the individual staff level, and more statistically sophisticated at higher levels of the welfare system. Thus, we need a variety of performance indicators, ranging from simple, timely, but rough proxies for value added to more sophisticated, accurate, and complex approximations. The questions addressed by this paper are twofold: (1) how can we develop better performance indicators that are both feasible and closely correlated with value added, and (2) once such measures are developed, how should they be used for program management? These questions are addressed based on experience with performance indicators in job training and welfare-to-work programs and on other previous research on welfare and the labor market.
Date of creation: 1995
Contact details of provider: 300 S. Westnedge Ave., Kalamazoo, MI 49007 USA
Web page: http://www.upjohn.org