
Methodological notes on model comparisons and strategy classification: A falsificationist proposition

Author

Listed:
  • Morten Moshagen
  • Benjamin E. Hilbig

Abstract

Taking a falsificationist perspective, the present paper identifies two major shortcomings of existing approaches to comparative model evaluations in general and strategy classifications in particular. These are (1) failure to consider systematic error and (2) neglect of global model fit. Using adherence measures to evaluate competing models implicitly makes the unrealistic assumption that the error associated with the model predictions is entirely random. By means of simple schematic examples, we show that failure to discriminate between systematic and random error seriously undermines this approach to model evaluation. Second, approaches that treat random versus systematic error appropriately usually rely on relative model fit to infer which model or strategy most likely generated the data. However, the model comparatively yielding the best fit may still be invalid. We demonstrate that taking for granted the vital requirement that a model by itself should adequately describe the data can easily lead to flawed conclusions. Thus, prior to considering the relative discrepancy of competing models, it is necessary to assess their absolute fit and thus, again, attempt falsification. Finally, the scientific value of model fit is discussed from a broader perspective.
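
The abstract's core argument, that the relatively best-fitting model may nonetheless fail to describe the data, can be illustrated with a small sketch. The following Python example is not from the paper; the strategies, error rates, and choice data are all hypothetical. It compares two choice strategies by log-likelihood (relative fit), but also runs an exact binomial goodness-of-fit test on each (absolute fit); the strategy winning the relative comparison can still be rejected in absolute terms.

    from math import comb, log

    def binom_loglik(k, n, p):
        # Log-likelihood of k adherent choices out of n, given predicted adherence p.
        return log(comb(n, k)) + k * log(p) + (n - k) * log(1 - p)

    def binom_pvalue(k, n, p):
        # Exact two-sided binomial test: the absolute (goodness-of-fit) check.
        probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
        return sum(pr for pr in probs if pr <= probs[k] + 1e-12)

    # Hypothetical data: 62 of 100 choices agree with each strategy's point prediction.
    n, k = 100, 62

    # Each strategy predicts adherence 1 - epsilon under an assumed random error rate.
    strategies = {"TTB (eps = .05)": 0.95, "WADD (eps = .20)": 0.80}

    fits = {name: {"loglik": binom_loglik(k, n, p), "p_abs": binom_pvalue(k, n, p)}
            for name, p in strategies.items()}

    best = max(fits, key=lambda s: fits[s]["loglik"])
    print("Relatively best-fitting strategy:", best)
    print("Its absolute-fit p-value:", round(fits[best]["p_abs"], 6))
    # A small p-value means even the relative winner misdescribes the data,
    # so it should not be accepted as the data-generating strategy.

In this hypothetical run the lower-adherence strategy wins the likelihood comparison, yet the binomial test rejects it, mirroring the paper's recommendation to attempt falsification via absolute fit before interpreting relative fit.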

Suggested Citation

  • Morten Moshagen & Benjamin E. Hilbig, 2011. "Methodological notes on model comparisons and strategy classification: A falsificationist proposition," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 6(8), pages 814-820, December.
  • Handle: RePEc:jdm:journl:v:6:y:2011:i:8:p:814-820

    Download full text from publisher

    File URL: http://journal.sjdm.org/11/m32/m32.pdf
    Download Restriction: no

    File URL: http://journal.sjdm.org/11/m32/m32.html
    Download Restriction: no

    References listed on IDEAS

    1. Benjamin E. Hilbig, 2008. "One-reason decision making in risky choice? A closer look at the priority heuristic," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 3(6), pages 457-462, August.
    2. Andreas Glöckner & Tilmann Betsch, 2008. "Multiple-Reason Decision Making Based on Automatic Processing," Discussion Paper Series of the Max Planck Institute for Research on Collective Goods 2008_12, Max Planck Institute for Research on Collective Goods.
    3. Kahneman, Daniel & Tversky, Amos, 1979. "Prospect Theory: An Analysis of Decision under Risk," Econometrica, Econometric Society, vol. 47(2), pages 263-291, March.
    4. Klaus Fiedler, 2010. "How to study cognitive decision algorithms: The case of the priority heuristic," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 5(1), pages 21-32, February.
    5. Benjamin E. Hilbig, 2010. "Precise models deserve precise measures: A methodological dissection," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 5(4), pages 272-284, July.

    Citations

    Cited by:

    1. Andreas Glöckner & Arndt Bröder, 2014. "Cognitive integration of recognition information and additional cues in memory-based decisions," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 9(1), pages 35-50, January.
    2. Marc Jekel & Andreas Glöckner & Arndt Bröder & Viktoriya Maydych, 2014. "Approximating rationality under incomplete information: Adaptive inferences for missing cue values based on cue-discrimination," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 9(2), pages 129-147, March.
    3. Benjamin E. Hilbig, 2014. "On the role of recognition in consumer choice: A model comparison," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 9(1), pages 51-57, January.

    More about this item

    Keywords

    falsification; error; model testing; model fit

