Assessing research excellence: Evaluating the Research Excellence Framework

Mehmet Pinar, Timothy J Horne, Assessing research excellence: Evaluating the Research Excellence Framework, Research Evaluation, Volume 31, Issue 2, April 2022, Pages 173–187, https://doi.org/10.1093/reseval/rvab042

Abstract

Performance-based research funding systems have been extensively used around the globe to allocate funds across higher education institutes (HEIs), which led to an increased amount of literature examining their use. The UK’s Research Excellence Framework (REF) uses a peer-review process to evaluate the research environment, research outputs and non-academic impact of research produced by HEIs to produce a more accountable distribution of public funds. However, carrying out such a research evaluation is costly. Given the cost and that it is suggested that the evaluation of each component is subject to bias and has received other criticisms, this article uses correlation and principal component analysis to evaluate REF’s usefulness as a composite evaluation index. As the three elements of the evaluation—environment, impact and output—are highly and positively correlated, the effect of the removal of an element from the evaluation leads to relatively small shifts in the allocation of funds and in the rankings of HEIs. As a result, future evaluations may consider the removal of some elements of the REF or reconsider a new way of evaluating different elements to capture organizational achievement rather than individual achievements.

For the uber impact geek, this is an assessment of the UK’s Research Excellence Framework (REF), which distributes a ton of money annually to universities based on their submissions of data and narratives on the quality of research outputs, the societal impacts of research, and the research environment.

There is a good history of the evolution of research assessments in the UK, starting in 1986 and ending with REF 2021. Societal impact was introduced in the 2014 exercise. REF submissions are reviewed by peer-review panels that assign scores from 1* (meh) to 4* (amazing). Funding is then allocated based on an institution’s performance in the assessment exercise.

The literature review illustrates the many critiques of the REF as a tool to distribute performance-based funding. It cites challenges of bias among reviewers, a “halo effect” privileging universities with established research track records, and the cost and effort required. The authors report an “estimated cost of the exercise to be £246 million, and the cost of preparing the REF submissions was £212 million. It can be estimated that roughly £19–27 million was spent preparing the research environment statements, and £55 million was spent in preparation of impact case studies, and the remainder cost of preparation may be associated with the output submission”.

Furthermore, drawing on previous statistical research, the authors illustrate that in multi-dimensional indices where the different elements are highly correlated, removing one element does not substantively change the overall assessment.
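That principle can be seen in a minimal sketch. The HEI scores below are invented for illustration (not REF data), chosen to be highly positively correlated across the three elements; the weights are the published REF 2021 weightings (outputs 60%, impact 25%, environment 15%). Because the elements move together, a composite built without one element (with the remaining weights renormalized) produces the same ordering of institutions:

```python
import numpy as np

# Hypothetical scores (0-4 grade-point scale) for six fictional HEIs on the
# three REF elements; values are invented and highly positively correlated.
scores = np.array([
    # output, impact, environment
    [3.6, 3.5, 3.7],
    [3.2, 3.1, 3.3],
    [2.9, 3.0, 2.8],
    [2.5, 2.4, 2.6],
    [2.1, 2.2, 2.0],
    [1.8, 1.7, 1.9],
])

weights_full = np.array([0.60, 0.25, 0.15])        # REF 2021 weights
weights_no_output = np.array([0.0, 0.625, 0.375])  # output removed, 25:15 renormalized

def ranking(weights):
    """Return HEI indices ordered best-first by the weighted composite score."""
    composite = scores @ weights
    return np.argsort(-composite)

print(ranking(weights_full))       # [0 1 2 3 4 5]
print(ranking(weights_no_output))  # [0 1 2 3 4 5] - same ordering
```

With strongly correlated elements, any positive re-weighting tends to preserve the ranking; the article's contribution is showing how small the reallocation is with the real REF data.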

After a bunch of math, numbers and statistics, they illustrate that removing one element (output, impact, or environment) would reduce effort and cost and not significantly change the overall ranking or funding to universities. “We found that the exclusion of one element from the REF or using equal weights would have benefited (disadvantaged) some HEIs, but at most £46.7 million (out of over £1 billion) would have been reallocated between HEIs when the output element is excluded from the evaluation.”

This 4.67% change doesn’t sound like much when presented as a statistic, but if you were a university losing out on funding or ranking, even by a little, it would likely mean a lot.

The article closes by prompting some thinking about the aim of REF. “Based on the definition of organizational evaluation, impact and output evaluations of the REF are based on the achievements of individuals, and if the aim is to evaluate the organizations, then evaluation of the impact and output elements, which are in essence individual achievements, could be removed, and their removal from the evaluation will not result in significant effects as found in this article. Therefore, if the REF aims to evaluate the organizational performance, the choice of the components should be further motivated by and rely on the metrics that evaluate the organization rather than the individual achievements.”

Questions for brokers:

  1. Does it make sense to spend £246 million to distribute £1 billion?
  2. What would you do: leave the REF as it is or remove one element? If the latter, which element would you remove: research outputs, impact, or environment?
  3. Some provinces like Ontario consider research performance (share of tri-council funding, industry funding) when funding university operations. Will Canada become a REF-like jurisdiction?

Research Impact Canada is producing this journal club series to make evidence on knowledge mobilization (KMb) more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read this open access article, then come back to this post and join the journal club by posting your comments.