
An Analysis Of Statistical Errors In Contingent Valuation Surveys


  • Dietz, Brian


Environmental economists using contingent valuation (CV) surveys are often interested in estimating the population willingness to pay (WTP) for some intervention from a sample of individuals randomly drawn from a population of interest. These surveys collect bids or WTP amounts, along with demographic and socioeconomic characteristics of the respondents, which are used to estimate descriptive statistics (such as mean or median WTP) or regression equations (such as bid or WTP functions). Statistical errors can occur either when the framework from which the sample was drawn is ignored or when the sampled elements are used to incorrectly infer relationships for a target population. These errors will often bias WTP estimates, although the magnitude and direction of the bias are often difficult to identify. In this paper, standard methods for testing and addressing these potential sources of error are presented, and alternative, often less restrictive, methods are presented and applied to a contingent valuation study of environmentally friendly changes in the operations of the Glen Canyon Dam (Welsh et al., 1995).

Great strides have been made in reducing potential biases in CV surveys. For the better part of the last 20 years, research in environmental economics has focused on reducing sources of bias such as embedding, information, and scope effects. But research on whom and what the respondents and nonrespondents represent has often taken a back seat, and in practice this question is either assumed away or simply overlooked (e.g., nonrespondents are assumed either (i) to behave like respondents or (ii) to behave in some other prespecified way). Two types of statistical errors associated with this final stage are sampling error and response error (a combination of nonresponse and selection errors).
Sampling errors can arise when the sample drawn does not directly reflect the population of interest, most commonly when incorrect sampling and survey methods are used, notably when sample weights are ignored in complex sample designs. A sample weight is, in effect, the number of individuals in the population that each sampled individual represents. Sample weights can also be used to test for non-ignorable sampling designs that might cause selection errors, and to protect against model misspecification. The role of sample weights in the statistical analysis of survey data, however, is the subject of considerable debate among theoretical statisticians despite their widespread use by applied statisticians.

Response errors occur when the sampled elements used to infer relationships for a population do not represent the target population. Regardless of the sampling structure, systematic differences between respondents and nonrespondents will usually invalidate population inferences based solely on survey data from respondents. Research in marketing and psychology has found that nonrespondents often differ from respondents in demographic characteristics, socioeconomic status, and attitudes and beliefs, especially those related to the survey in question. Selection error also arises when respondents (and nonrespondents) censor their bids. Selection errors are likely to occur for two reasons: (i) "protest" and other missing bids are removed as if they were outliers, and (ii) nonrespondents censor themselves. To adjust for these problems, most statistical approaches are forced to assume some model relating the likelihood of response to willingness to pay.
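The role of sample weights described above can be sketched in a few lines. The bids and stratum weights below are invented for illustration; each weight stands for the number of population members the sampled respondent represents.

```python
# Hypothetical CV bids (WTP in dollars) from a stratified sample in which
# the high-WTP stratum was deliberately oversampled.
bids    = [10.0, 10.0, 50.0, 50.0]
weights = [3.0, 3.0, 1.0, 1.0]  # population members each respondent represents

# Ignoring the design treats every respondent as equally representative.
unweighted_mean = sum(bids) / len(bids)

# The weighted estimator rescales each bid by the population it stands for.
weighted_mean = sum(w * b for w, b in zip(weights, bids)) / sum(weights)

print(unweighted_mean)  # 30.0
print(weighted_mean)    # 20.0
```

The gap between the two estimates (here, 30 versus 20 dollars) is exactly the kind of bias that arises when a complex sampling design is ignored.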
After applying the alternative methods to the data from the Glen Canyon Dam study, this paper shows that the results differ enough that whether an intervention is deemed worthwhile can depend on the error-correction method used. Since the more advanced and less restrictive methods generally provide more accurate results, environmental economists should be well versed in these methods or work closely with survey statisticians to obtain the most accurate estimates of population WTP.
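One of the less restrictive approaches alluded to above replaces behavioural assumptions about nonrespondents with worst-case bounds, in the spirit of Horowitz and Manski (1998). The numbers below are invented for illustration, not taken from the Glen Canyon Dam study.

```python
# Worst-case bounds on population mean WTP under nonresponse.
# All figures are hypothetical.
response_rate   = 0.6            # fraction of the sample returning a usable bid
respondent_mean = 40.0           # mean WTP among respondents (dollars)
wtp_min, wtp_max = 0.0, 100.0    # assumed logical range of WTP on the survey

# Population mean = p * E[WTP | respond] + (1 - p) * E[WTP | not respond].
# The second term is unobserved; bound it by the extremes of the WTP range.
lower = response_rate * respondent_mean + (1 - response_rate) * wtp_min
upper = response_rate * respondent_mean + (1 - response_rate) * wtp_max

print(lower, upper)  # 24.0 64.0
```

Without further assumptions about nonrespondents, mean WTP is identified only within this interval; an intervention whose cost falls inside the interval cannot be judged worthwhile or not from the respondent data alone.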

Suggested Citation

  • Dietz, Brian, 2001. "An Analysis Of Statistical Errors In Contingent Valuation Surveys," 2001 Annual meeting, August 5-8, Chicago, IL 20462, American Agricultural Economics Association (New Name 2008: Agricultural and Applied Economics Association).
  • Handle: RePEc:ags:aaea01:20462



    More about this item


    Keywords: Research Methods/Statistical Methods





    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis. RePEc uses bibliographic data supplied by the respective publishers.