Understanding, measuring, and encouraging public policy research impact

Williams, K., & Lewis, J. M. (2021). Understanding, measuring, and encouraging public policy research impact. Australian Journal of Public Administration, 1–11. https://doi.org/10.1111/1467-8500.12506

Abstract

Academics undertaking public policy research are committed to tackling interesting questions driven by curiosity, but they generally also want their research to have an impact on government, service delivery, or public debate. Yet our ability to capture the impact of this research is limited because impact is under‐theorised, and current systems of research impact evaluation do not allow for multiple or changing research goals. This article develops a conceptual framework for understanding, measuring, and encouraging research impact for those who seek to produce research that speaks to multiple audiences. The framework brings together message, medium, audience, engagement, impact, evaluation, and affordance within the logics of different fields. It sets out a new way of considering research goals, measurements, and incentives in an integrated way. By accounting for the logics of different fields, which encompass disciplinary, institutional, and intrinsic factors, the framework provides a new way of harnessing measurements and incentives towards fruitful learning about the contribution diverse types of public policy research can make to wider impact.

Academics undertaking public policy research are committed to tackling interesting questions, and many also seek to have an impact on governments, policy, and the public. Yet the impact of policy research is not easily represented. This article presents a conceptual framework that suggests better ways to understand, measure, and encourage public policy research impact.

I know what you’re thinking. Do we really need another conceptual model? On the one hand, no: we have enough models, and we should be moving from the conceptual to the practical and just start doing stuff based on the models we have. On the other hand, some new models bring new thinking that can inform practice.

My soapbox off the top: the paper refers to measuring impact. I don’t measure impact, because that suggests a quantitative measurement, like a measuring tape. Australia and the UK are among the few countries that assess societal impact and compare one institution’s impact against another’s. In countries that (thankfully) lack a system-wide impact assessment scheme, like Canada, I prefer to “collect and communicate the evidence of impact”. It’s a small change of wording, but my researchers don’t want to be assessed (I had to look up “neoliberal” when I offered to assess the impact of researchers). Rant over.

This paper integrates the following elements of the research-to-impact process.

  • Message: what does the evidence say?
  • Medium: how is the message conveyed?
  • Audience: who is receiving the message?
  • Engagement: what is the evidence that the message was received?
  • Effects: what happened because the message was received (shorter term)?
  • Impact: what societal benefit occurred (longer term)?
  • Evaluation: what did you do to learn about the impact? The article suggests different methods and indicators.
  • Affordance: what motivates the parties to undertake the work?

And these elements are described within the logics of different fields (i.e., disciplines).

The article argues against simplistic and linear forms of impact assessment, and against measuring what can be measured instead of what should be measured. Collecting data on each of these elements will help you write a more compelling case study, and it complements the work of Mark Reed and colleagues describing the characteristics of strong impact case studies, reviewed earlier in this journal club. This, in my opinion, is the article’s contribution.

It makes one very quick remark that I think needs a ton of unpacking: “It also suggests that engagement should be considered as a measure of effort and success in disseminating research”.

The article does add to our understanding of the multiple processes and people contributing to research impact. It also reinforces that there are no templated approaches to impact and that impact is difficult to tie to a single published article or a single researcher.

And here is my gift to you: a tool to help collect and communicate the evidence of impact. It is available in English and French and has just been published (reference below). It provides a semi-structured interview guide and gives voice to the contributions of non-academic partners/participants in the research-to-impact process. It also requires triangulating the interviews with documented evidence of change. You’re welcome.

Phipps, D., Poetz, A., & Johnny, M. (2022). Demonstrating impact: Considerations for collecting and communicating the evidence of impact. In W. B. Kelly (Ed.), The impactful academic: Being and becoming an impactful researcher throughout your career. Emerald Publishing Limited. https://www.emerald.com/insight/publication/doi/10.1108/9781801178426

Questions for brokers:

  1. Does this change anything you do for impact, or does it change how you think about impact? Or both?
  2. Affordance is new. What is your motivation for collecting and communicating the evidence of impact?
  3. Engagement as a measure of effort? Should we include engagement in impact assessments, as Australia does, or keep it out, as the UK does? Or should we dispense with impact assessment altogether, given the critiques summarized in this article, and instead focus on assessing engagement, since it is closer to the research, researchers, and institutions being assessed?

Research Impact Canada is producing this journal club series to make evidence on knowledge mobilization more accessible and to facilitate discussion about research on knowledge mobilization. It is designed for knowledge brokers and other parties interested in knowledge mobilization.