Defining and characterising structural uncertainty in decision analytic models
Abstract

An inappropriate structure for a decision analytic model can invalidate estimates of cost-effectiveness and of the value of further research. However, there are often several alternative and credible structural assumptions that can be made. Although it is common practice to acknowledge potential limitations in model structure, there is a lack of clarity about methods to characterise the uncertainty surrounding alternative structural assumptions and their contribution to decision uncertainty. A review of decision models commissioned by the NHS Health Technology Assessment Programme was undertaken to identify the types of model uncertainty described in the literature, and a second review was undertaken to identify approaches to characterising these uncertainties. The assessment of structural uncertainty has received little attention in the health economics literature. A common approach is to compute results for each alternative model specification and to present the alternatives as scenario analyses, leaving the decision maker to assess the credibility of the alternative structures when interpreting the range of results. The review of methods to explicitly characterise structural uncertainty identified two: (1) model averaging, in which alternative models with different specifications are built and their results averaged using explicit prior distributions, often based on expert opinion; and (2) model selection on the basis of predictive performance or goodness of fit. For a number of reasons, these methods are neither appropriate nor desirable for characterising structural uncertainty in decision analytic models. When faced with a choice between multiple models, an alternative approach allows structural uncertainty to be considered explicitly without discarding potentially relevant model structures: the uncertainty can be characterised (or parameterised) directly in the model itself. This approach is analogous to model averaging over individual or sets of model inputs, but it also allows the value of information associated with resolving structural uncertainties to be established.
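The parameterisation described above can be illustrated with a minimal Monte Carlo sketch. All names, distributions, and prior weights below are hypothetical and purely illustrative, not taken from the paper: two alternative structural assumptions about a treatment effect are treated as a discrete uncertain "parameter", sampled with explicit prior weights inside the probabilistic sensitivity analysis, so that the output distribution reflects both parameter and structural uncertainty.

```python
import random

random.seed(1)

# Hypothetical structural alternatives: each function returns one draw of
# incremental net benefit (INB) under that structural assumption.
# The distributions are assumed for illustration only.
def inb_effect_persists():
    return random.gauss(500, 300)   # structure A: treatment effect persists

def inb_effect_wanes():
    return random.gauss(-100, 300)  # structure B: treatment effect wanes

structures = [inb_effect_persists, inb_effect_wanes]
prior_weights = [0.6, 0.4]  # assumed prior credibility of each structure

# Parameterise structural uncertainty: in each simulation, sample a
# structure according to its prior weight, then sample the INB under it.
draws = []
for _ in range(10_000):
    model = random.choices(structures, weights=prior_weights)[0]
    draws.append(model())

mean_inb = sum(draws) / len(draws)
prob_cost_effective = sum(d > 0 for d in draws) / len(draws)
```

Under this sketch, the mean INB is a weighted average over the structures, and the probability of cost-effectiveness reflects the prior weight given to each structural assumption rather than conditioning on a single chosen model.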
Bibliographic Info

Paper provided by the Centre for Health Economics, University of York, in its series Working Papers, number 009cherp.
Length: 25 pages
Date of creation: Mar 2006