
The Rasch Analysis Shows Poor Construct Validity and Low Reliability of the Quebec User Evaluation of Satisfaction with Assistive Technology 2.0 (QUEST 2.0) Questionnaire

Author

Listed:
  • Antonio Caronni

    (IRCCS Istituto Auxologico Italiano, Department of Neurorehabilitation Sciences, Ospedale San Luca, 20122 Milan, Italy)

  • Marina Ramella

    (IRCCS Fondazione Don Carlo Gnocchi Onlus, 20148 Milan, Italy)

  • Pietro Arcuri

    (IRCCS Fondazione Don Carlo Gnocchi Onlus, 20148 Milan, Italy)

  • Claudia Salatino

    (IRCCS Fondazione Don Carlo Gnocchi Onlus, 20148 Milan, Italy)

  • Lucia Pigini

    (IRCCS Fondazione Don Carlo Gnocchi Onlus, 20148 Milan, Italy)

  • Maurizio Saruggia

    (IRCCS Fondazione Don Carlo Gnocchi Onlus, 20148 Milan, Italy)

  • Chiara Folini

    (IRCCS Fondazione Don Carlo Gnocchi Onlus, 20148 Milan, Italy)

  • Stefano Scarano

    (IRCCS Istituto Auxologico Italiano, Department of Neurorehabilitation Sciences, Ospedale San Luca, 20122 Milan, Italy
    Department of Biomedical Sciences for Health, Università Degli Studi di Milano, 20129 Milan, Italy)

  • Rosa Maria Converti

    (IRCCS Fondazione Don Carlo Gnocchi Onlus, 20148 Milan, Italy)

Abstract

This study aims to test the construct validity and reliability of the device scale of the Quebec User Evaluation of Satisfaction with Assistive Technology 2.0 (QUEST 2.0), an eight-item questionnaire for measuring satisfaction with assistive devices. We collected 250 questionnaires from 79 patients and 32 caregivers, with one QUEST completed for each assistive device; five assistive device types were included. The QUEST was tested with Rasch analysis (a Many-Facet Rating Scale Model with persons, items, and device type as facets). Most patients had neurological disabilities, and most questionnaires concerned mobility devices. All items fitted the Rasch model (InfitMS range: 0.88–1.10; OutfitMS range: 0.84–1.28). However, the questionnaire showed a large ceiling effect (15 of 111 participants obtained the maximum score), poor targeting (respondents' mean measure: 1.90 logits), and low reliability (0.71). The device classes had different calibrations (range: −1.18 to 1.26 logits), and item 3 functioned differently in patients and caregivers. In conclusion, QUEST satisfaction measures have low reliability and weak construct validity. Lacking invariance, the QUEST total score is unsuitable for comparing the satisfaction levels of users of different device types, and the differential item functioning suggests that the QUEST could also be problematic for comparing satisfaction between patients and caregivers.
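As context for the analysis summarized above (a minimal sketch, not taken from the article itself), the Many-Facet Rating Scale Model with the three facets named in the abstract can be written in LaTeX, with B_n, D_i, C_g, and F_k assumed here as notation for the respondent, item, device-type, and threshold parameters:

    \log \frac{P_{nigk}}{P_{nig(k-1)}} = B_n - D_i - C_g - F_k

Here P_{nigk} is the probability that respondent n chooses response category k rather than k-1 on item i for a device of type g; B_n is the respondent's satisfaction measure, D_i the item calibration, C_g the device-type calibration (the third facet), and F_k the threshold between adjacent categories, shared across items as in the Rating Scale Model. Under this assumed formulation, the reported lack of invariance corresponds to device-type calibrations (and, for item 3, group-dependent item behaviour) that shift across groups, which is why raw total scores cannot be compared directly between device types or between patients and caregivers.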

Suggested Citation

  • Antonio Caronni & Marina Ramella & Pietro Arcuri & Claudia Salatino & Lucia Pigini & Maurizio Saruggia & Chiara Folini & Stefano Scarano & Rosa Maria Converti, 2023. "The Rasch Analysis Shows Poor Construct Validity and Low Reliability of the Quebec User Evaluation of Satisfaction with Assistive Technology 2.0 (QUEST 2.0) Questionnaire," IJERPH, MDPI, vol. 20(2), pages 1-19, January.
  • Handle: RePEc:gam:jijerp:v:20:y:2023:i:2:p:1036-:d:1027098

    Download full text from publisher

    File URL: https://www.mdpi.com/1660-4601/20/2/1036/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1660-4601/20/2/1036/
    Download Restriction: no

    References listed on IDEAS

    1. Geoff Masters, 1982. "A Rasch model for partial credit scoring," Psychometrika, Springer;The Psychometric Society, vol. 47(2), pages 149-174, June.
    2. De Boeck, Paul & Bakker, Marjan & Zwitser, Robert & Nivard, Michel & Hofman, Abe & Tuerlinckx, Francis & Partchev, Ivailo, 2011. "The Estimation of Item Response Models with the lmer Function from the lme4 Package in R," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 39(12).
    3. Paul De Boeck, 2008. "Random Item IRT Models," Psychometrika, Springer;The Psychometric Society, vol. 73(4), pages 533-559, December.
    4. David Andrich, 1978. "A rating formulation for ordered response categories," Psychometrika, Springer;The Psychometric Society, vol. 43(4), pages 561-573, December.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. P. A. Ferrari & S. Salini, 2008. "Measuring Service Quality: The Opinion of Europeans about Utilities," Working Papers 2008.36, Fondazione Eni Enrico Mattei.
    2. Chang, Hsin-Li & Yang, Cheng-Hua, 2008. "Explore airlines’ brand niches through measuring passengers’ repurchase motivation—an application of Rasch measurement," Journal of Air Transport Management, Elsevier, vol. 14(3), pages 105-112.
    3. Ivana Bassi & Matteo Carzedda & Enrico Gori & Luca Iseppi, 2022. "Rasch analysis of consumer attitudes towards the mountain product label," Agricultural and Food Economics, Springer;Italian Society of Agricultural Economics (SIDEA), vol. 10(1), pages 1-25, December.
    4. Hua-Hua Chang, 1996. "The asymptotic posterior normality of the latent trait for polytomous IRT models," Psychometrika, Springer;The Psychometric Society, vol. 61(3), pages 445-463, September.
    5. Curt Hagquist & Raili Välimaa & Nina Simonsen & Sakari Suominen, 2017. "Differential Item Functioning in Trend Analyses of Adolescent Mental Health – Illustrative Examples Using HBSC-Data from Finland," Child Indicators Research, Springer;The International Society of Child Indicators (ISCI), vol. 10(3), pages 673-691, September.
    6. Salzberger, Thomas & Newton, Fiona J. & Ewing, Michael T., 2014. "Detecting gender item bias and differential manifest response behavior: A Rasch-based solution," Journal of Business Research, Elsevier, vol. 67(4), pages 598-607.
    7. Rasmus A. X. Persson, 2023. "Theoretical evaluation of partial credit scoring of the multiple-choice test item," METRON, Springer;Sapienza Università di Roma, vol. 81(2), pages 143-161, August.
    8. Chang, Hsin-Li & Wu, Shun-Cheng, 2008. "Exploring the vehicle dependence behind mode choice: Evidence of motorcycle dependence in Taipei," Transportation Research Part A: Policy and Practice, Elsevier, vol. 42(2), pages 307-320, February.
    9. Genge, Ewa & Bartolucci, Francesco, 2019. "Are attitudes towards immigration changing in Europe? An analysis based on bidimensional latent class IRT models," MPRA Paper 94672, University Library of Munich, Germany.
    10. Joshua B. Gilbert & James S. Kim & Luke W. Miratrix, 2023. "Modeling Item-Level Heterogeneous Treatment Effects With the Explanatory Item Response Model: Leveraging Large-Scale Online Assessments to Pinpoint the Impact of Educational Interventions," Journal of Educational and Behavioral Statistics, , vol. 48(6), pages 889-913, December.
    11. Jesper Tijmstra & Maria Bolsinova, 2019. "Bayes Factors for Evaluating Latent Monotonicity in Polytomous Item Response Theory Models," Psychometrika, Springer;The Psychometric Society, vol. 84(3), pages 846-869, September.
    12. Salzberger, Thomas & Koller, Monika, 2013. "Towards a new paradigm of measurement in marketing," Journal of Business Research, Elsevier, vol. 66(9), pages 1307-1317.
    13. Richard N McNeely & Salissou Moutari & Samuel Arba-Mosquera & Shwetabh Verma & Jonathan E Moore, 2018. "An alternative application of Rasch analysis to assess data from ophthalmic patient-reported outcome instruments," PLOS ONE, Public Library of Science, vol. 13(6), pages 1-32, June.
    14. Francesca DE BATTISTI & Giovanna NICOLINI & Silvia SALINI, 2008. "Methodological overview of Rasch model and application in customer satisfaction survey data," Departmental Working Papers 2008-04, Department of Economics, Management and Quantitative Methods at Università degli Studi di Milano.
    15. Kuan-Yu Jin & Yi-Jhen Wu & Hui-Fang Chen, 2022. "A New Multiprocess IRT Model With Ideal Points for Likert-Type Items," Journal of Educational and Behavioral Statistics, , vol. 47(3), pages 297-321, June.
    16. van der Ark, L. Andries, 2012. "New Developments in Mokken Scale Analysis in R," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 48(i05).
    17. Piotr Tarka, 2013. "Model of latent profile factor analysis for ordered categorical data," Statistics in Transition new series, Główny Urząd Statystyczny (Polska), vol. 14(1), pages 171-182, March.
    18. Xiaohui Zheng & Sophia Rabe-Hesketh, 2007. "Estimating parameters of dichotomous and ordinal item response models with gllamm," Stata Journal, StataCorp LP, vol. 7(3), pages 313-333, September.
    19. Timo Bechger & Gunter Maris, 2015. "A Statistical Test for Differential Item Pair Functioning," Psychometrika, Springer;The Psychometric Society, vol. 80(2), pages 317-340, June.
    20. Lai-Fa Hung & Wen-Chung Wang, 2012. "The Generalized Multilevel Facets Model for Longitudinal Data," Journal of Educational and Behavioral Statistics, , vol. 37(2), pages 231-255, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jijerp:v:20:y:2023:i:2:p:1036-:d:1027098. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.