The Mis‐conceptualisation of Societal Impact: Why the Swiss Approach to Societal Impact is Productive and not Inexistent

Ochsner, M. (2024). The Mis‐conceptualisation of Societal Impact: Why the Swiss Approach to Societal Impact is Productive and not Inexistent. Swiss Political Science Review. 10.1111/spsr.12618. https://onlinelibrary.wiley.com/doi/pdf/10.1111/spsr.12618

Abstract

Societal impact as a buzz word is high on the agenda of policy-makers around the world. Often, the UK Research Excellence Framework (REF) is cited as the initiator of making the societal impact of research relevant and visible. Equally often, it is said that in Switzerland societal impact is not yet considered in higher education policy. In this paper, I show that both claims are as wrong as they are common. I argue that the UK REF’s “impact agenda” is strongly linked to a specific ideology and does not represent the only approach to valorizing and fostering the research-society nexus. By pointing out some major issues in impact evaluation and presenting how the research-society nexus is discussed in Swiss science policy as a contrasting case, I sketch an approach to impact evaluation based on the role of research in society that considers different forms of knowledge generation and dissemination.

I was hoping to get more information about the Swiss approach to assessing societal impact. It is there, but it is presented within a strong critique of impact assessment nearly everywhere else (well, not Canada, and not most places), along with a brief compare-and-contrast with the UK REF and the Dutch SEP.

The author’s principal argument is that we can’t evaluate something if we don’t know what it is. I don’t disagree that we need more empirical research on societal impact; a lot of our understanding comes from practice, not scholarship. But I disagree that “little is known about it even among experienced researchers and evaluators and even less do those who ask for it have a clear idea: the definitions in evaluation procedures and funding schemes remain more than vague”. On top of the many references in his article, there are entire research units focusing on societal impact, including RoRI and RURU.

Something I learned is that the distinction between Mode 1 research (traditional academic scholarship) and Mode 2 research (“addresses problems set in a context of application, is transdisciplinary, characterized by heterogeneity”) was published in 1994 by Gibbons et al. – check out the reference list. The author then spends about 1.5 pages explaining how Gibbons got it wrong. The critique is a good read, and I have no basis to agree or disagree, but I am not certain what this amount of argument adds to the author’s intent.

There is a good section on “The Insufficiency of Bibliometric Research Evaluation”. But within it the author states that “instead of evaluating whether research contributes to the role research should fulfil in society in the respective discipline, both metric and review-based evaluations of societal impact focus on easily available data or evidence, leading to a similarly reductionist approach to evaluation as the use of simple bibliometrics in academic impact evaluation.” I am confident that anyone who has gathered evidence for an impact case study would disagree that the evidence is easily available.

My main problem isn’t with the argument – debate is welcome – but with the way the arguments are justified. The author cites individual examples and anecdotes to support his claims. He points out that a policy-maker needs input from “established knowledge” (i.e., a systematic review), not from a single study, but then offers only single anecdotes to corroborate his own arguments.

He states that science and policy-making need to be separate, but he later states that “impact is an interplay of academic freedom, interaction between stakeholders and (public) discussion”. And his last sentence is “What should be gratified in evaluations is the diversity of research activities and outputs that combine to (co-)produce and disseminate research and to interact with stakeholders”. It doesn’t sound like science and policy-making remain separate in his world, despite his own recommendation.

And finally, this is what I take away about the Swiss impact system – which is why I turned to this paper. There is no central assessment scheme. There are universities and universities of applied sciences, and there are funders of basic research and of applied research, and both types of institution can apply to both funders. Apparently, “each university is obliged to evaluate its research, but there is no centralised evaluation scheme. Each university has its own evaluation procedure adapted to its mission. Whether or not societal impact plays a role, and if so in what form, differs from institution to institution.” I would have liked more detail about how societal impact is assessed in those universities where impact assessment is part of the evaluation. As the title suggested, I wanted to know why the Swiss system is productive.

Questions for brokers:

  1. Should we keep science and policy making separate or promote collaboration/co-production?
  2. Read up on instrumental and conceptual types of research impact. Could this article have benefitted from an appreciation of conceptual impact?
  3. Apart from RoRI and RURU where do you turn to for expertise on impact?

Research Impact Canada is producing this journal club series to make evidence on knowledge mobilization more accessible to knowledge brokers and to facilitate discussion about research on knowledge mobilization. It is designed for knowledge brokers and other parties interested in knowledge mobilization.