Digital Science (2016). The Societal and Economic Impacts of Academic Research: International perspectives on good practice and managing evidence. Report available from https://www.digital-science.com/resources/digital-research-reports/digital-research-report-societal-economic-impacts-academic-research/
Introduction
This report was created to support a workshop held in London in March 2016, funded by the Higher Education Funding Council for England (HEFCE). Its objectives are to encourage researchers across all disciplines to reflect on what socio-economic research impact means for their areas of interest and on what types of evidence best reflect achievement.
When impact case studies were added to the UK’s 2014 Research Excellence Framework, they created a new way of looking at what research delivers. The approach has proven remarkably workable and incredibly revealing. There is a flavour to research outcomes that analytical indicators can never provide. But this was the first time such an exercise had been tried across all subjects in all universities in one cycle: one very flexible template fitted all. Now, with this experience, we have an opportunity to reflect on what worked and what could be improved.
Disciplinary communities must reflect on what they believe culturally constitutes proper, acceptable and appropriate evidence of economic, social or other impact and what constitutes strong or weak levels of achievement. It seems unlikely that broad-based sciences and arts will conceptualise impact, evidence of impact and assessment of impact in the same ways. There may also be divergence between professionally-focussed areas, like social policy, and their background academic disciplines, like sociology. And, whereas citation impact is used in the same way across continents, does the cultural construction of research impact allow it to become a global comparator?
This report is a reflection on the UK Research Excellence Framework (REF) and introduces some international perspectives on impact. It offers several useful conclusions about impact assessment and systems of impact assessment. It lacks a critical or empirical research lens, but the people and organizations providing the commentary are recognized leaders in their fields. We can learn much from these leaders as Canada and other countries start to think about impact assessment.
Impact assessment is hard and expensive but worthwhile. The impact case study portion of the REF cost £55M but drove the allocation of over £1.6B in investment in higher education. That works out to a 3.4% transaction cost, much lower than the 13% transaction cost cited for grant writing.
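That percentage is simply the ratio of the two figures reported:

\[
\frac{\pounds 55\,\text{M}}{\pounds 1{,}600\,\text{M}} \approx 0.034 \approx 3.4\%
\]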
Four types of evidence were used to substantiate claims of impact: stakeholder information, testimonials, online traffic, and positions of responsibility. Figure 1 breaks these down into more granular forms of evidence (e.g. letters of support and focus groups as two of the seven types of testimonial evidence). Testimonials were the most frequently used evidence type for the Physical Sciences & Engineering, Social Sciences, and Arts & Humanities assessment panels, while the Biological Sciences & Medicine panel used reports more frequently.
The report advocates for evidence collection during the conduct of the research, not just retrospectively. This requires impact to be planned (ex ante) rather than waiting until after the project has finished (ex post). It also requires some system to collect, gather and store the different forms of evidence of impact. Commercial products such as Researchfish and VV Impact Tracker can help with this, although a good electronic filing system and an Excel spreadsheet will likely suffice for a small-scale impact assessment, as sketched below.
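To make that concrete, here is a minimal sketch of what such a home-grown evidence log could look like. The column names and the example entry are my own illustrative assumptions, not a schema from the report or from either commercial product:

```python
import csv
import os
from datetime import date

# Hypothetical schema for a small-scale impact-evidence log; adjust the
# columns to the evidence types your project actually collects
# (testimonials, reports, online traffic, positions of responsibility, etc.).
FIELDS = ["date_collected", "project", "evidence_type",
          "source", "description", "file_path"]

def log_evidence(path, **record):
    """Append one piece of evidence to a CSV log, writing a header row
    the first time the file is created."""
    is_new = not os.path.exists(path)
    record.setdefault("date_collected", date.today().isoformat())
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(record)

# Example entry: a testimonial captured mid-project rather than at the end.
log_evidence("impact_evidence.csv",
             project="example KMb project",          # hypothetical name
             evidence_type="testimonial",
             source="community partner",
             description="Letter describing changed intake practices",
             file_path="evidence/letter_2016-03.pdf")
```

The point is less the tooling than the habit: every new piece of evidence gets logged, dated, and filed as it arrives rather than reconstructed after the project ends.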
Related to this, the report states: “Experience enabling clients to report research impact confirms that planning for impact is best done at the beginning of the research process, putting in place impact data and evidence capture as the project is being conceptualised.” See a Mobilize This! blog post about how this ex ante (planned at the beginning) impact assessment is the same thing as knowledge mobilization planning. When you plan your knowledge mobilization (i.e. impact) strategy, you need to develop a theory of change (i.e. impact pathway), which will inform the indicators you need to measure along the way and at the end of the process. Knowledge mobilization planning is ex ante impact assessment.
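One way to see the connection is to treat the impact pathway itself as data, with each stage tied to the indicators you plan, ex ante, to capture. The stages and indicators below are illustrative assumptions, not taken from the report:

```python
# A hypothetical theory of change (impact pathway) expressed as data.
# Each stage names the indicators you would plan, ex ante, to measure.
impact_pathway = [
    {"stage": "activities",
     "indicators": ["workshops held", "plain-language summaries published"]},
    {"stage": "engagement",
     "indicators": ["partner meetings", "online traffic", "media mentions"]},
    {"stage": "uptake",
     "indicators": ["citations in policy documents", "requests for advice"]},
    {"stage": "impact",
     "indicators": ["testimonials", "documented change in policy or practice"]},
]

# Turn the plan into a simple measurement checklist.
for step in impact_pathway:
    print(f"{step['stage']:>12}: {', '.join(step['indicators'])}")
```

Writing the pathway down this way also makes clear which indicators are milestones along the route (engagement) and which evidence the destination (impact).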
Australia has launched its research engagement and impact pilot. While it falls short of saying engagement metrics are a proxy for impact, the theory is that engagement is a necessary precursor to impact, something I wrote about more than two years ago. Engagement should not be assessed separately from impact: engagement metrics can be measured along the pathway from research to impact, but they should not be conflated with actual impact.
Questions for Brokers:
1. Discuss: You can’t have impact without engagement, but you can have engagement without impact.
2. Commercial systems such as Researchfish and VV Impact Tracker are used (usually) by academic institutions to capture the evidence of impact from (usually) researchers. But the forms of evidence of impact (Figure 1) are derived from research users/partners. Why do we ask researchers to gather the evidence of impact when they aren’t the ones making the impact?
3. Impact assessment is hard but worthwhile. In Canada, we don’t (yet) have a REF-like system that requires impact assessment. Why would we bother doing this if we don’t have to?
Research Impact Canada is producing this journal club series to make evidence on KMb more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read this open access report. Then come back to this post and join the journal club by posting your comments.