Systematic Influences on Teaching Evaluations: The Case for Caution
The evaluation of teaching and learning has become an important activity in tertiary education institutions. Student surveys provide information about students’ perceptions and judgements of a particular subject, but, as is widely recognised, the appropriate interpretation of these data is problematic. There is a large literature, mainly for the US, on the use and usefulness of student subject evaluations. This literature has highlighted a number of ‘mitigating factors’, such as subject difficulty and discipline area, that should be taken into account in interpreting questionnaire results. In this paper we examine eight years of Quality of Teaching (QOT) responses from an economics department in an Australian university, covering more than 79,000 student subject enrolments in 565 subjects. The purpose of this analysis is to establish how the information contained in these data can be used to interpret the responses; in particular, we determine the extent to which factors other than the instructor in charge of the subject affect the raw average student evaluation scores. We find that the following characteristics had an influence on the average QOT score: year level, enrolment size, the quantitative nature of the subject, the country of origin of the students, the proportion of students who are female, Honours status, the differential between a student’s mark in the subject and their previous marks, quality of the workbook, quality of the textbook, and the relative QOT score versus other subjects taught at the same time. However, a number of other factors proposed in the literature as important influences were found not to be, including the student’s fee-paying status, whether they attended a public, private or Catholic secondary school, which other faculty within the University they came from, and whether the subject was taught in multiple sessions.
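The kind of analysis the abstract describes, regressing average evaluation scores on subject and cohort characteristics with heteroskedasticity-robust inference, can be sketched as below. This is a minimal illustration on synthetic data, not the paper's actual specification: the regressors (class size, a quantitative-subject dummy) and all coefficient values are hypothetical. The robust covariance follows White (1980), which the paper draws on.

```python
import numpy as np

# Hypothetical sketch: OLS of average QOT score on subject characteristics,
# with White (1980) heteroskedasticity-consistent standard errors.
# All variable names and parameter values are illustrative.
rng = np.random.default_rng(0)
n = 200
class_size = rng.uniform(20, 400, n)             # enrolment size per subject
quantitative = rng.integers(0, 2, n).astype(float)  # 1 if quantitative subject
X = np.column_stack([np.ones(n), np.log(class_size), quantitative])
beta_true = np.array([3.8, -0.05, -0.10])        # assumed data-generating values
y = X @ beta_true + rng.normal(0.0, 0.2, n)      # average QOT score

# OLS coefficients: beta_hat = (X'X)^-1 X'y
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

# White's sandwich estimator: (X'X)^-1 [X' diag(e^2) X] (X'X)^-1
meat = X.T @ (resid[:, None] ** 2 * X)
cov_white = XtX_inv @ meat @ XtX_inv
se_white = np.sqrt(np.diag(cov_white))

print("coefficients:", np.round(beta_hat, 3))
print("robust SEs:  ", np.round(se_white, 3))
```

With raw class averages as the dependent variable, robust standard errors matter because the variance of an average falls with enrolment size, a mechanical source of heteroskedasticity across subjects.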
Date of creation: 2005
Contact details of provider:
Postal: Department of Economics, The University of Melbourne, Level 4, FBE Building, 111 Barry Street, Victoria 3010, Australia
Phone: +61 3 8344 5355
Fax: +61 3 8344 6899
Web page: http://fbe.unimelb.edu.au/economics
Handle: RePEc:mlb:wpaper:953