Printed from https://ideas.repec.org/a/taf/oaefxx/v4y2016i1p1127746.html

The relationship between faculty characteristics and the use of norm- and criteria-based grading

Author

Listed:
  • John Robst
  • Jennifer VanGilder

Abstract

Norm-based grading has been associated with a reduction in student incentives to learn. It is therefore important to understand faculty incentives for using norm-based grading. This paper used two waves of the National Study of Postsecondary Faculty to examine faculty characteristics related to the use of norm-based grading. Results suggest that norm-based grading is more likely when faculty and departments are more research oriented. Faculty who are lower in rank, male, younger, or in science and social science departments are more likely to use norm-based grading, while faculty who believe that teaching should be the primary promotion criterion are more likely to use criteria-based grading.

Suggested Citation

  • John Robst & Jennifer VanGilder, 2016. "The relationship between faculty characteristics and the use of norm- and criteria-based grading," Cogent Economics & Finance, Taylor & Francis Journals, vol. 4(1), pages 1127746-112, December.
  • Handle: RePEc:taf:oaefxx:v:4:y:2016:i:1:p:1127746
    DOI: 10.1080/23322039.2015.1127746

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1080/23322039.2015.1127746
    Download Restriction: Access to full text is restricted to subscribers.

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Bishop, John H, 1997. "The Effect of National Standards and Curriculum-Based Exams on Achievement," American Economic Review, American Economic Association, vol. 87(2), pages 260-264, May.
    2. Krautmann, Anthony C. & Sander, William, 1999. "Grades and student evaluations of teachers," Economics of Education Review, Elsevier, vol. 18(1), pages 59-63, February.
    3. Mason, Paul M. & Steagall, Jeffrey W. & Fabritius, Michael M., 2003. "The changing quality of business education," Economics of Education Review, Elsevier, vol. 22(6), pages 603-609, December.
    4. Donna K. Ginther & Shulamit Kahn, 2004. "Women in Economics: Moving Up or Falling Off the Academic Career Ladder?," Journal of Economic Perspectives, American Economic Association, vol. 18(3), pages 193-214, Summer.
    5. Becker, William E. & Rosen, Sherwin, 1992. "The learning effect of assessment and evaluation in high school," Economics of Education Review, Elsevier, vol. 11(2), pages 107-118, June.
    6. Donald G. Freeman, 1999. "Grade Divergence as a Market Outcome," The Journal of Economic Education, Taylor & Francis Journals, vol. 30(4), pages 344-351, December.
    7. Michael A McPherson & R Todd Jewell & Myungsup Kim, 2009. "What Determines Student Evaluation Scores? A Random Effects Analysis of Undergraduate Economics Classes," Eastern Economic Journal, Palgrave Macmillan;Eastern Economic Association, vol. 35(1), pages 37-51.
    8. Bishop, J., 1997. "The Effect of national Standards and Curriculum-Based Exams on Achievement," Papers 97-01, Cornell - Center for Advanced Human Resource Studies.
    9. Pedro Landeras, 2009. "Student effort: standards vs. tournaments," Applied Economics Letters, Taylor & Francis Journals, vol. 16(9), pages 965-969.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Schwerdt, Guido & Woessmann, Ludger, 2017. "The information value of central school exams," Economics of Education Review, Elsevier, vol. 56(C), pages 65-79.
    2. Oliver Himmler & Robert Schwager, 2013. "Double Standards in Educational Standards – Do Schools with a Disadvantaged Student Body Grade More Leniently?," German Economic Review, Verein für Socialpolitik, vol. 14(2), pages 166-189, May.
3. Sander Gerritsen & Dinand Webbink, 2013. "How much do children learn in school? International evidence from school entry rules," CPB Discussion Paper 255, CPB Netherlands Bureau for Economic Policy Analysis.
    4. John Bishop & Ludger Wossmann, 2004. "Institutional Effects in a Simple Model of Educational Production," Education Economics, Taylor & Francis Journals, vol. 12(1), pages 17-38.
    5. Miroslava Federicova, 2014. "The Impact of High-Stakes School-Admission Exams on Study Effort and Achievements: Quasi-experimental Evidence from Slovakia," Investigaciones de Economía de la Educación volume 9, in: Adela García Aracil & Isabel Neira Gómez (ed.), Investigaciones de Economía de la Educación 9, edition 1, volume 9, chapter 27, pages 515-532, Asociación de Economía de la Educación.
    6. Maria De Paola & Vincenzo Scoppa, 2008. "A signalling model of school grades: centralized versus decentralized examinations," Economics of Education Working Paper Series 0025, University of Zurich, Department of Business Administration (IBW).
    7. Piopiunik, Marc & Hanushek, Eric A. & Wiederhold, Simon, 2014. "The Impact of Teacher Skills on Student Performance across Countries," VfS Annual Conference 2014 (Hamburg): Evidence-based Economic Policy 100356, Verein für Socialpolitik / German Economic Association.
    8. Puhani, Patrick A. & Yang, Philip, 2020. "Does increased teacher accountability decrease leniency in grading?," Journal of Economic Behavior & Organization, Elsevier, vol. 171(C), pages 333-341.
    9. Maarten Cornet & Free Huizinga & Bert Minne & Dinand Webbink, 2006. "Successful knowledge policies," CPB Memorandum 158, CPB Netherlands Bureau for Economic Policy Analysis.
    10. Bergbauer, Annika B. & Hanushek, Eric A. & Woessmann, Ludger, 2018. "Testing," IZA Discussion Papers 11683, Institute of Labor Economics (IZA).
    11. Anne Boring, 2015. "Gender Biases in student evaluations of teachers," Documents de Travail de l'OFCE 2015-13, Observatoire Francais des Conjonctures Economiques (OFCE).
    12. Brindusa Anghel & Antonio Cabrales & Jorge Sainz & Ismael Sanz, 2015. "Publicizing the results of standardized external tests: does it have an effect on school outcomes?," IZA Journal of European Labor Studies, Springer;Forschungsinstitut zur Zukunft der Arbeit GmbH (IZA), vol. 4(1), pages 1-20, December.
    13. Giorgio Brunello & Lorenzo Rocco, 2008. "Educational Standards in Private and Public Schools," Economic Journal, Royal Economic Society, vol. 118(533), pages 1866-1887, November.
    14. Katherine Caves & Simone Balestra, 2018. "The impact of high school exit exams on graduation rates and achievement," The Journal of Educational Research, Taylor & Francis Journals, vol. 111(2), pages 186-200, March.
    15. West, Martin R. & Woessmann, Ludger, 2006. "Which school systems sort weaker students into smaller classes? International evidence," European Journal of Political Economy, Elsevier, vol. 22(4), pages 944-968, December.
    16. Felix Büchel & Hendrik Jürges & Kerstin Schneider, 2003. "Die Auswirkungen zentraler Abschlussprüfungen auf die Schulleistung: quasi-experimentelle Befunde aus der deutschen TIMSS-Stichprobe," Vierteljahrshefte zur Wirtschaftsforschung / Quarterly Journal of Economic Research, DIW Berlin, German Institute for Economic Research, vol. 72(2), pages 238-251.
    17. Hanushek, Eric A. & Woessmann, Ludger, 2011. "Sample selectivity and the validity of international student achievement tests in economic research," Economics Letters, Elsevier, vol. 110(2), pages 79-82, February.
    18. Zakharov, Andrey & Carnoy, Martin, 2015. "Are teachers accurate in predicting their students’ performance on high stakes’ exams? The case of Russia," International Journal of Educational Development, Elsevier, vol. 43(C), pages 1-11.
    19. Ali Palali & Roel van Elk & Jonneke Bolhaar & Iryna Rud, 2017. "Are good researchers also good teachers? The relationship between research quality and teaching quality," CPB Discussion Paper 347, CPB Netherlands Bureau for Economic Policy Analysis.
    20. Ewing, Andrew M., 2012. "Estimating the impact of relative expected grade on student evaluations of teachers," Economics of Education Review, Elsevier, vol. 31(1), pages 141-154.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:taf:oaefxx:v:4:y:2016:i:1:p:1127746. See general information about how to correct material in RePEc.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Longhurst. General contact details of provider: http://www.tandfonline.com/OAEF20.

If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis . RePEc uses bibliographic data supplied by the respective publishers.