‘Impact’ in the proposals for the UK's Research Excellence Framework: Shifting the boundaries of academic autonomy
Evaluation of university-based research already has a reasonably long tradition in the UK, but proposals to revise the framework for national evaluation aroused controversy in the academic community because they envisage assessing more explicitly than before the economic, social and cultural ‘impact’ of research as well as its scientific quality. Using data from the 2009 public consultation on the proposals for a Research Excellence Framework, this paper identifies three main lines of controversy: the threats to academic autonomy implied in the definition of expert review and the delimitation of reviewers, the scope for boundary-work in the construction of impact narratives and case studies, and the framing of knowledge translation by the stipulation that impact ‘builds on’ research. Given the behaviour-shaping effects of research evaluation, the paper demonstrates how the proposed changes could help embed impact considerations among the routine reflexive tools of university researchers and enhance rather than restrict academic autonomy at the level of research units. It also argues that the REF could constitute an important dialogical space for negotiating science–society relations in an era of increasing heteronomy between academia, state and industry. But the paper raises doubts about whether the proposed operationalisation of impact is adequate to evaluate the ways that research and knowledge translation are actually carried out.