Morgan, B. (2014). Research impact: Income for outcome. Nature, 511(7510 Suppl.), S72–S75. doi:10.1038/511S72a. http://www.nature.com/nature/journal/v511/n7510_supp/pdf/511S72a.pdf
No abstract; article attached below.
Thanks to @KMbeing for finding and sharing this paper with me.
This short paper provides a snapshot of some efforts to measure (or at least monitor) the extra-academic impacts of research in New Zealand and Australia. The paper starts out with some fairly philosophical thoughts about impact: "Determining the impact of research on wider society has the potential to assist decision-makers within organizations and institutions. But what is troubling people like Gluckman are the definitions. 'You have to be really clear about the word: there are many different kinds of impact and perceptions differ,' he says. 'Governments have to decide what impacts they are looking for.' Questions surround what constitutes impact and at what point during or after the research process it should be evaluated."
Then the paper asks an important question: "Can something that is subjective and qualitative ever be appropriately measured?"
This question continues to fuel the angst of our knowledge mobilization communities. Can we ever really measure impact? "No" is not an acceptable answer: if we can't, we will never be funded, nor be able to articulate the various returns on investment in knowledge mobilization services.
The paper references impact "taxonomies," as well as a group in New Zealand mapping impacts along "financial, social, environmental, public policy and capability" dimensions to represent the full range of potential impacts of research. Adam Jaffe, director of New Zealand's Motu Economic and Public Policy Research think tank, likens measuring impact to assessing the performance of a baseball team. "The fact that sometimes you strike out and sometimes you do well doesn't stop us from thinking about who is better on average," he says. "We can look at which models on average generate the greatest outcomes and impacts across a number of different measures." In this regard, impact is measured at the project level but is aggregated up into an overall picture of the impact of a portfolio of research.
In Australia, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) is the largest of the Australian government's portfolio-funded research agencies, and it formally and transparently plans, monitors and evaluates the impact of its research. "CSIRO uses an impact pathway model that describes a project's inputs, activities, outputs, expected outcomes and eventually impact – for example, the adoption of new research protocols that improve productivity."
Interestingly, this aligns with some recent thinking about the flow of research to impact from Alberta Innovates Health Solutions and the Centre for Research on Families and Relationships (Edinburgh) (see research uptake, use and impact), and from our own work with some of Canada's Networks of Centres of Excellence (see figure from slide 12).
What is interesting about this convergent evolution is that we seem to be getting comfortable with the linearity of these pathways. Our community abandoned linear models of knowledge mobilization long ago, but that non-linearity applies within a single knowledge mobilization project. When working in a system of knowledge mobilization, the portfolio of projects as a whole does move toward impact, and that movement is linear from research to impact.
CSIRO has conducted "tens of thousands of projects," and the article maps 286 of them on a timeline of impacts from 1979 to impacts projected to accrue in 2034. On the third page of this article there is an interesting visualization of the impact of these 286 projects, but what I find more interesting is what isn't made explicit. If we guess that "tens of thousands of projects" means 50,000 (roughly midway between 10,000 and 100,000), then 286 projects having even projected impact to 2034 is an impact rate of about 0.6%. In this very large portfolio of research projects, fewer than 1% of them are projected to have impact.
This is a pure guess on my part, but it does illustrate that in a portfolio we don't expect every project to perform equally well (remember the baseball analogy above). This again calls into question the utility of measuring the impact of a single research project. Rather, it suggests that impact should be measured at the portfolio or systems level.
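For readers who want to check the back-of-envelope estimate above, it can be sketched in a few lines of Python. Note that the 50,000 figure is purely my guessed portfolio size, not a number reported in the paper:

```python
# Back-of-envelope estimate of CSIRO's project "impact rate".
# Assumption: "tens of thousands of projects" is read as roughly 50,000.
total_projects = 50_000    # guessed size of the full project portfolio
impactful_projects = 286   # projects mapped on the 1979-2034 impact timeline

impact_rate = impactful_projects / total_projects
print(f"Estimated impact rate: {impact_rate:.2%}")  # prints "Estimated impact rate: 0.57%"
```

Varying the guess changes the figure, of course, but even at the low end of "tens of thousands" (10,000 projects) the rate is still under 3%.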
The paper then reflects on impact in the higher education (or tertiary) sector. The UK is partway through its Research Excellence Framework, in which universities have to report not only on the quality of their research outputs but also on the extra-academic impacts of that research. Some Australian institutions are undertaking impact assessments even in the absence of public policy directives to do so.
But what this paper doesn't do…and maybe this is beyond its scope…is discuss what metrics and indicators these leading practices use to evaluate impact. What do you count? Whose stories do you tell? Here we need more work to come to a greater shared understanding and a better practice than we currently have.
This was the subject of a report on an evaluation of SSHRC's Connections programs. That evaluation looked at SSHRC's suite of Connections (= knowledge mobilization) funding programs and sought to identify best practices for articulating the impacts of the projects. Bottom line: don't ask the researcher what extra-academic impacts occurred. Ask the research partners, because that is where impact is expressed. Our researchers don't make and sell products – our industry partners do. Our researchers don't make policy – our government partners do. Our researchers don't (usually) deliver social services – our community partners do. If you want your research to have economic, social or environmental impacts, then you need to work with partners, as they are the ones who will make the products, policies and services that have an impact on the lives of citizens (or on the lives of fish…if you're an ichthyologist!).
Questions for brokers
- If impact is best measured at the aggregate, portfolio or systems level, then how useful is it to measure the impact of one research project? What if that one project is your project? Does that make a difference?
- Should a university (or Faculty, Department or research unit) evaluate the extra-academic impacts of its research? What are the risks and benefits of doing this (to faculty, students, partners and to the institution itself)?
- What metrics, indicators or methodologies do you use (or avoid using) when measuring impact?
ResearchImpact-RéseauImpactRecherche is producing this journal club series as a way to make the evidence and research on knowledge mobilization more accessible to knowledge brokers, and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read the article, then come back to this post and join the journal club by posting your comments.