IDEAS home Printed from https://ideas.repec.org/a/cup/inorps/v9y2016i01p3-22_00.html

Situational Judgment Tests: From Measures of Situational Judgment to Measures of General Domain Knowledge

Author

Listed:
  • Lievens, Filip
  • Motowidlo, Stephan J.

Abstract

Situational judgment tests (SJTs) are typically conceptualized as contextualized selection procedures that capture candidate responses to a set of relevant job situations as a basis for prediction. SJTs share their sample-based and contextualized approach with work samples and assessment center exercises, although they differ from these other simulations by presenting the situations in a low-fidelity (e.g., written) format. In addition, SJTs do not require candidates to respond through actual behavior because they capture candidates' situational judgment via a multiple-choice response format. Accordingly, SJTs have also been labeled low-fidelity simulations. This SJT paradigm has been very successful: In the last 2 decades, scientific interest in SJTs has grown, and they have made rapid inroads in practice as attractive, versatile, and valid selection procedures. Despite their popularity and the voluminous research on their criterion-related validity, however, there has been little attention to developing a theory of why SJTs work. Similarly, in SJT development, often little emphasis is placed on measuring clear and explicit constructs. Therefore, Landy (2007) referred to SJTs as "psychometric alchemy" (p. 418).

Suggested Citation

  • Lievens, Filip & Motowidlo, Stephan J., 2016. "Situational Judgment Tests: From Measures of Situational Judgment to Measures of General Domain Knowledge," Industrial and Organizational Psychology, Cambridge University Press, vol. 9(1), pages 3-22, March.
  • Handle: RePEc:cup:inorps:v:9:y:2016:i:01:p:3-22_00

    Download full text from publisher

    File URL: https://www.cambridge.org/core/product/identifier/S1754942615000711/type/journal_article
    File Function: link to article abstract page
    Download Restriction: no

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Gabriel Olaru & Jeremy Burrus & Carolyn MacCann & Franklin M Zaromb & Oliver Wilhelm & Richard D Roberts, 2019. "Situational Judgment Tests as a method for measuring personality: Development and validity evidence for a test of Dependability," PLOS ONE, Public Library of Science, vol. 14(2), pages 1-19, February.



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.