
How Reliable are Students’ Evaluations of Teaching (SETs)? A Study to Test Student’s Reproducibility and Repeatability

Author

Listed:
  • Amalia Vanacore

    (University of Naples “Federico II”)

  • Maria Sole Pellegrino

    (University of Naples “Federico II”)

Abstract

Students’ Evaluations of Teaching (SETs) are widely used as measures of teaching quality in Higher Education. A review of the specialized literature shows that researchers widely debate whether SETs can be considered reliable measures of teaching quality. Although the controversy mainly concerns the role of students as assessors of teaching quality, most research studies on SETs focus on the design and validation of the evaluation procedure, and even when the need to measure SET reliability is recognized, it is generally assessed only indirectly for the whole group of students by measuring inter-student agreement. In this paper the focus is on the direct assessment of the reliability of each student as a measurement instrument of teaching quality. An agreement-based approach is adopted to assess each student's ability to provide consistent and stable evaluations; sampling uncertainty is accounted for by building non-parametric bootstrap confidence intervals for the adopted agreement coefficients.
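To make the agreement-based approach concrete, the sketch below is a hypothetical illustration, not the paper's exact procedure: it computes a kappa-type agreement coefficient between one student's test and retest ratings and a non-parametric percentile bootstrap confidence interval obtained by resampling the rated items. The uniform chance-correction is assumed here in the spirit of the kappa-type indices discussed by de Mast (2007, listed in the references); the scale size, example data, and the function names kappa_type_agreement and bootstrap_ci are illustrative assumptions.

```python
import numpy as np

def kappa_type_agreement(r1, r2, n_categories):
    """Kappa-type agreement with uniform chance correction (assumed form)
    between two sets of ratings given by the same student on the same items."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_obs = np.mean(r1 == r2)          # observed proportion of identical ratings
    p_chance = 1.0 / n_categories      # agreement expected under uniform chance
    return (p_obs - p_chance) / (1.0 - p_chance)

def bootstrap_ci(r1, r2, n_categories, n_boot=2000, alpha=0.05, seed=0):
    """Non-parametric percentile bootstrap CI obtained by resampling items."""
    rng = np.random.default_rng(seed)
    r1, r2 = np.asarray(r1), np.asarray(r2)
    n = len(r1)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)    # resample rated items with replacement
        stats[b] = kappa_type_agreement(r1[idx], r2[idx], n_categories)
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Example: one student's test and retest ratings of 20 items on a 4-point scale
test   = [3, 4, 2, 3, 1, 4, 4, 2, 3, 3, 1, 2, 4, 3, 2, 1, 3, 4, 2, 3]
retest = [3, 4, 2, 2, 1, 4, 3, 2, 3, 3, 1, 2, 4, 3, 2, 2, 3, 4, 2, 3]
print(kappa_type_agreement(test, retest, n_categories=4))
print(bootstrap_ci(test, retest, n_categories=4))
```

Resampling the rated items, rather than assuming a sampling distribution for the coefficient, mirrors the non-parametric bootstrap idea mentioned in the abstract; the percentile interval is the simplest variant, and other bootstrap intervals could be substituted.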

Suggested Citation

  • Amalia Vanacore & Maria Sole Pellegrino, 2019. "How Reliable are Students’ Evaluations of Teaching (SETs)? A Study to Test Student’s Reproducibility and Repeatability," Social Indicators Research: An International and Interdisciplinary Journal for Quality-of-Life Measurement, Springer, vol. 146(1), pages 77-89, November.
  • Handle: RePEc:spr:soinre:v:146:y:2019:i:1:d:10.1007_s11205-018-02055-y
    DOI: 10.1007/s11205-018-02055-y

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s11205-018-02055-y
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s11205-018-02055-y?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Anthony Onwuegbuzie & Larry Daniel & Kathleen Collins, 2009. "A meta-validation model for assessing the score-validity of student teaching evaluations," Quality & Quantity: International Journal of Methodology, Springer, vol. 43(2), pages 197-209, March.
    2. Duane Alwin, 1989. "Problems in the estimation and interpretation of the reliability of survey data," Quality & Quantity: International Journal of Methodology, Springer, vol. 23(3), pages 277-331, September.
    3. Samer Kherfi, 2011. "Whose Opinion Is It Anyway? Determinants of Participation in Student Evaluation of Teaching," The Journal of Economic Education, Taylor & Francis Journals, vol. 42(1), pages 19-30, January.
    4. Martin Davies & Joe Hirschberg & Jenny Lye & Carol Johnston & Ian Mcdonald, 2007. "Systematic Influences On Teaching Evaluations: The Case For Caution," Australian Economic Papers, Wiley Blackwell, vol. 46(1), pages 18-38, March.
    5. Mónica Martínez-Gómez & Jose Sierra & José Jabaloyes & Manuel Zarzo, 2011. "A multivariate method for analyzing and improving the use of student evaluation of teaching questionnaires: a case study," Quality & Quantity: International Journal of Methodology, Springer, vol. 45(6), pages 1415-1427, October.
    6. Maarten Goos & Anna Salomons, 2017. "Measuring teaching quality in higher education: assessing selection bias in course evaluations," Research in Higher Education, Springer;Association for Institutional Research, vol. 58(4), pages 341-364, June.
    7. Michelle Lalla & Gisella Facchinetti & Giovanni Mastroleo, 2005. "Ordinal scales and fuzzy set systems to measure agreement: An application to the evaluation of teaching activity," Quality & Quantity: International Journal of Methodology, Springer, vol. 38(5), pages 577-601, January.
    8. de Mast, Jeroen, 2007. "Agreement and Kappa-Type Indices," The American Statistician, American Statistical Association, vol. 61, pages 148-153, May.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. María del Carmen Olmos-Gómez & Mónica Luque-Suárez & Concetta Ferrara & Jesús Manuel Cuevas-Rincón, 2020. "Analysis of Psychometric Properties of the Quality and Satisfaction Questionnaire Focused on Sustainability in Higher Education," Sustainability, MDPI, vol. 12(19), pages 1-16, October.
    2. Luis Matosas-López & Cesar Bernal-Bravo & Alberto Romero-Ania & Irene Palomero-Ilardia, 2019. "Quality Control Systems in Higher Education Supported by the Use of Mobile Messaging Services," Sustainability, MDPI, vol. 11(21), pages 1-14, October.
    3. Solmaz Ghaffarian Asl & Necdet Osam, 2021. "A Study of Teacher Performance in English for Academic Purposes Course: Evaluating Efficiency," SAGE Open, , vol. 11(4), pages 21582440211, October.
    4. Cannon, Edmund & Cipriani, Giam Pietro, 2021. "Gender Differences in Student Evaluations of Teaching: Identification and Consequences," IZA Discussion Papers 14387, Institute of Labor Economics (IZA).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Angelo Antoci & Irene Brunetti & Pierluigi Sacco & Mauro Sodini, 2021. "Student evaluation of teaching, social influence dynamics, and teachers’ choices: An evolutionary model," Journal of Evolutionary Economics, Springer, vol. 31(1), pages 325-348, January.
    2. Neckermann, Susanne & Turmunkh, Uyanga & van Dolder, Dennie & Wang, Tong V., 2022. "Nudging student participation in online evaluations of teaching: Evidence from a field experiment," European Economic Review, Elsevier, vol. 141(C).
    3. Cannon, Edmund & Cipriani, Giam Pietro, 2021. "Gender Differences in Student Evaluations of Teaching: Identification and Consequences," IZA Discussion Papers 14387, Institute of Labor Economics (IZA).
    4. Matthijs Warrens, 2010. "Inequalities Between Kappa and Kappa-Like Statistics for k×k Tables," Psychometrika, Springer;The Psychometric Society, vol. 75(1), pages 176-185, March.
    5. Cernat, Alexandru & Watson, Nicole & Lugtig, Peter & Noah Uhrig, S.C., 2014. "Assessing and relaxing assumptions in quasi-simplex models," ISER Working Paper Series 2014-09, Institute for Social and Economic Research.
    6. Matthijs Warrens, 2010. "A Formal Proof of a Paradox Associated with Cohen’s Kappa," Journal of Classification, Springer;The Classification Society, vol. 27(3), pages 322-332, November.
    7. Matthijs J. Warrens, 2014. "Power Weighted Versions of Bennett, Alpert, and Goldstein’s," Journal of Mathematics, Hindawi, vol. 2014, pages 1-9, December.
    8. Ale J. Hejase & Hussin J. Hejase & Rana S. Al Kaakour, 2014. "The Impact of Students’ Characteristics on their Perceptions of the Evaluation of Teaching Process," International Journal of Management Sciences, Research Academy of Social Sciences, vol. 4(2), pages 90-105.
    9. José M. Ramírez-Hurtado & Alfredo G. Hernández-Díaz & Ana D. López-Sánchez & Víctor E. Pérez-León, 2021. "Measuring Online Teaching Service Quality in Higher Education in the COVID-19 Environment," IJERPH, MDPI, vol. 18(5), pages 1-14, March.
    10. Matthijs Warrens, 2010. "Inequalities between multi-rater kappas," Advances in Data Analysis and Classification, Springer;German Classification Society - Gesellschaft für Klassifikation (GfKl);Japanese Classification Society (JCS);Classification and Data Analysis Group of the Italian Statistical Society (CLADAG);International Federation of Classification Societies (IFCS), vol. 4(4), pages 271-286, December.
    11. Bonaccolto-Töpfer, Marina & Castagnetti, Carolina, 2021. "The COVID-19 pandemic: A threat to higher education?," Discussion Papers 117, Friedrich-Alexander University Erlangen-Nuremberg, Chair of Labour and Regional Economics.
    12. Marco Taliento, 2022. "The Triple Mission of the Modern University: Component Interplay and Performance Analysis from Italy," World, MDPI, vol. 3(3), pages 1-24, July.
    13. Duane F. Alwin, 1997. "Feeling Thermometers Versus 7-Point Scales," Sociological Methods & Research, , vol. 25(3), pages 318-340, February.
    14. Michele Lalla, 2017. "Fundamental characteristics and statistical analysis of ordinal variables: a review," Quality & Quantity: International Journal of Methodology, Springer, vol. 51(1), pages 435-458, January.
    15. McKEE J. McCLENDON & DUANE F. ALWIN, 1993. "No-Opinion Filters and Attitude Measurement Reliability," Sociological Methods & Research, , vol. 21(4), pages 438-464, May.
    16. Pierpaolo D’Urso & Livia Giovanni & Marta Disegna & Riccardo Massari & Vincenzina Vitale, 2021. "A Tourist Segmentation Based on Motivation, Satisfaction and Prior Knowledge with a Socio-Economic Profiling: A Clustering Approach with Mixed Information," Social Indicators Research: An International and Interdisciplinary Journal for Quality-of-Life Measurement, Springer, vol. 154(1), pages 335-360, February.
    17. De Witte, Kristof & Rogge, Nicky, 2011. "Accounting for exogenous influences in performance evaluations of teachers," Economics of Education Review, Elsevier, vol. 30(4), pages 641-653, August.
    18. Niccolò Cao & Antonio Calcagnì, 2022. "Jointly Modeling Rating Responses and Times with Fuzzy Numbers: An Application to Psychometric Data," Mathematics, MDPI, vol. 10(7), pages 1-11, March.
    19. Wagner, Natascha & Rieger, Matthias & Voorvelt, Katherine, 2016. "Gender, ethnicity and teaching evaluations: Evidence from mixed teaching teams," Economics of Education Review, Elsevier, vol. 54(C), pages 79-94.
    20. Duane F. Alwin & Jon A. Krosnick, 1991. "The Reliability of Survey Attitude Measurement," Sociological Methods & Research, , vol. 20(1), pages 139-181, August.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:soinre:v:146:y:2019:i:1:d:10.1007_s11205-018-02055-y. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to it that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.