Assessment, Evaluations, and Definitions of Research Impact: A Review

Penfield, T., Baker, M.J., Scoble, R. and Wykes, M.C. (2014) Assessment, evaluations, and definitions of research impact: A review. Research Evaluation, 23(1), 21-32. https://academic.oup.com/rev/article/23/1/21/2889056
Abstract
This article aims to explore what is understood by the term ‘research impact’ and to provide a comprehensive assimilation of available literature and information, drawing on global experiences to understand the potential for methods and frameworks of impact assessment to be implemented for UK impact assessment. We take a more focused look at the impact component of the UK Research Excellence Framework taking place in 2014, some of the challenges of evaluating impact, the role that systems might play in the future for capturing the links between research and impact, and the requirements we have for these systems.
This article is one among many discussing systems of research impact assessment. It is a good overview of why we do impact assessment: it discusses a couple of different models, points out some of the pitfalls, and calls for systems and tools to do this work. Looking back five years since its submission in 2013, how much progress have we made?
The paper is delightfully brief on definitions. They are important, but let’s not get stuck in thinking when we could get on with doing. One key distinction to appreciate is between academic outputs (data, articles, patents, books, performances, etc.) and the impact on end beneficiaries that results when someone uses those outputs to inform products, policies, and services.
The paper presents five challenges (but few answers) for impact assessment, most related to the timeframe in which impact occurs:
1. Time lag: impact takes time. If you’re setting up a system of impact assessment, make sure you leave enough time between research outputs and impact assessment (though the paper offers no advice on how long is long enough).
2. Developmental nature of impact: impact doesn’t just happen one day; it develops over time. If you left more time, would you see more impact?
3. Attribution: research evidence is only one input into observable change. See this blog post for more information.
4. Knowledge creep: your research evidence also isn’t static; it grows over time, so your outputs will differ depending on when you assess.
5. Gathering evidence: if we gather evidence retrospectively, much will be lost because people move in and out of roles. Collecting evidence prospectively addresses this (but that requires systems – see below).
The paper then turns to systems for capturing the evidence of impact.
The evidence of impact is resistant to quantitative indicators; they tell only part of the story. Whatever system you develop needs to “capture links between and evidence of the full pathway from research to impact, including knowledge exchange, outputs, outcomes, and interim impacts, to allow the route to impact to be traced.” This evidence is made up of indicators, narratives, surveys, testimonials, and citations in grey and policy literature, and it is most effectively expressed in case-study format.
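The paper doesn’t prescribe a design for such a system, so the following Python sketch is purely illustrative (every class, field, and function name here is hypothetical, not drawn from the paper). It models the pathway as linked records, each step carrying its own evidence, so that the route from output to impact can be traced:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical data model for tracing a pathway from research to impact.
# Nothing here comes from the paper; it is one possible shape for such a system.

@dataclass
class Evidence:
    """One piece of evidence: a testimonial, survey result, or policy citation."""
    kind: str           # e.g. "testimonial", "survey", "policy citation"
    description: str
    collected_on: date  # captured prospectively, while sources are still reachable

@dataclass
class Step:
    """One step on the pathway: output, knowledge exchange, outcome, or impact."""
    stage: str
    description: str
    evidence: list[Evidence] = field(default_factory=list)
    leads_to: list["Step"] = field(default_factory=list)  # links to later steps

def trace(step: Step, depth: int = 0) -> None:
    """Walk the linked pathway and print the route to impact."""
    print("  " * depth + f"[{step.stage}] {step.description} "
          f"({len(step.evidence)} piece(s) of evidence)")
    for nxt in step.leads_to:
        trace(nxt, depth + 1)

# Worked example: an article informs a guideline, which changes practice.
article = Step("output", "Peer-reviewed article on fall prevention")
briefing = Step("knowledge exchange", "Policy briefing for the health ministry")
guideline = Step("outcome", "Revised clinical guideline citing the article",
                 evidence=[Evidence("policy citation",
                                    "Guideline section 4 cites the article",
                                    date(2017, 5, 1))])
impact = Step("interim impact", "Reported reduction in patient falls",
              evidence=[Evidence("testimonial", "Ward manager interview",
                                 date(2018, 3, 12))])

article.leads_to.append(briefing)
briefing.leads_to.append(guideline)
guideline.leads_to.append(impact)
trace(article)
```

Even a toy model like this makes the paper’s point concrete: the hard part is not storing the records but getting researchers to create the links and attach the evidence as they go.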
Even a system-wide effort like the UK REF, which retrospectively developed 6,679 impact case studies across all disciplines, has its limitations. As we move to systems, “the transition to routine capture of impact data not only requires the development of tools and systems to help with implementation but also a cultural change to develop practices, currently undertaken by a few to be incorporated as standard behaviour among researchers and universities.” The paper also concludes that “the development of tools and systems for assisting with the impact evaluation would be very valuable”.
And yet in 2016, The Research Impact Handbook provided general advice on capturing the evidence of impact in chapter 21, but it offered no tools for the task, even though tools were presented for event planning, impact planning, and stakeholder engagement.
Questions for brokers
1. Why are you doing your assessment? Do you want to advocate for change (e.g. more funding), be held accountable to your stakeholders, allocate funding, and/or understand a system of impact so you can provide advice? These 4As are adapted from the reasons for impact assessment given in the paper.
2. Read the post on attribution linked above and figure out if you care about attribution.
3. It’s 2018. We have the REF in the UK, the Netherlands Standard Evaluation Protocol, the Australian Engagement and Impact Assessment Pilot, and the Performance-Based Research Fund in New Zealand. Why haven’t we developed any tools to help with this important but admittedly difficult undertaking?
Research Impact Canada is producing this journal club series as a way to make evidence on knowledge mobilization more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read this open access article. Then come back to this post and join the journal club by posting your comments.