Methodological notes on model comparisons and strategy classification: A falsificationist proposition
Abstract
Taking a falsificationist perspective, the present paper identifies two major shortcomings of existing approaches to comparative model evaluations in general and strategy classifications in particular. These are (1) failure to consider systematic error and (2) neglect of global model fit. Using adherence measures to evaluate competing models implicitly makes the unrealistic assumption that the error associated with the model predictions is entirely random. By means of simple schematic examples, we show that failure to discriminate between systematic and random error seriously undermines this approach to model evaluation. Second, approaches that treat random versus systematic error appropriately usually rely on relative model fit to infer which model or strategy most likely generated the data. However, even the model yielding the comparatively best fit may still be invalid. We demonstrate that taking for granted the vital requirement that a model should, by itself, adequately describe the data can easily lead to flawed conclusions. Thus, before considering the relative discrepancy of competing models, it is necessary to assess their absolute fit, which again amounts to attempting falsification. Finally, the scientific value of model fit is discussed from a broader perspective.
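The abstract's first point, that adherence rates presuppose purely random error, can be illustrated with a minimal simulation sketch. The sketch below is not taken from the paper; the item counts, error pattern, and strategy labels are assumed purely for illustration. It shows how a participant who genuinely uses strategy A, but whose errors cluster on the diagnostic items where A and a competing strategy B disagree, ends up looking more adherent to B.

import numpy as np

rng = np.random.default_rng(0)
n_items = 100

# Hypothetical predictions of two competing strategies (1 = choose option X).
# Assumption for illustration: A and B agree on 70 of the 100 items.
pred_A = rng.integers(0, 2, n_items)
pred_B = pred_A.copy()
diagnostic = rng.choice(n_items, size=30, replace=False)  # items where A and B disagree
pred_B[diagnostic] = 1 - pred_B[diagnostic]

# The simulated participant truly uses strategy A, but the errors are
# systematic: they fall on 20 of the 30 diagnostic items (e.g., because
# those items are harder), rather than being scattered at random.
choices = pred_A.copy()
error_items = rng.choice(diagnostic, size=20, replace=False)
choices[error_items] = 1 - choices[error_items]

adherence_A = (choices == pred_A).mean()  # 0.80: the true strategy
adherence_B = (choices == pred_B).mean()  # 0.90: the false strategy "wins"
print(f"Adherence to A: {adherence_A:.2f}")
print(f"Adherence to B: {adherence_B:.2f}")

Under these assumed numbers, an adherence-based classification would select B even though A generated the data; a procedure that models systematic error explicitly, or that first checks whether each strategy describes the data adequately in absolute terms, would be less easily misled.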
Bibliographic Info
Article provided by Society for Judgment and Decision Making in its journal Judgment and Decision Making.
Volume (Year): 6 (2011)
Issue (Month): 8 (December)
Keywords: falsification; error; model testing; model fit