Productive Interactions

This journal club entry considers two papers, both published in a special issue of Research Evaluation in 2011, so this journal club is a little longer than most. Both papers describe the use of “productive interactions”, a framework for illustrating the social impact of research in the social sciences.
Spaapen, J. & van Drooge, L. (2011). Introducing ‘productive interactions’ in social impact assessment. Research Evaluation, 20(3), 211-218. DOI: 10.3152/095820211X12941371876742
Abstract: Social impact of research is difficult to measure. Attribution problems arise because of the often long time-lag between research and a particular impact, and because impacts are the consequences of multiple causes. Furthermore, there is a lack of robust measuring instruments. We aim to overcome these problems through a different approach to evaluation where learning is the prime concern instead of judging. We focus on what goes on between researchers and other actors, and so narrow the gap between research and impact, or at least make it transparent. And by making the process visible, we are able to suggest indicator categories that arguably lead to more robust measuring instruments. We propose three categories of what we refer to as ‘productive interactions’: direct or personal interactions; indirect interactions through texts or artifacts; and financial interactions through money or ‘in kind’ contributions.
Molas-Gallart, J. & Tang, P. (2011). Tracing ‘productive interactions’ to identify social impacts. Research Evaluation, 20(3), 219-226. DOI: 10.3152/095820211X12941371876706
Abstract: This paper applies the SIAMPI approach, which focuses on the concept of productive interactions, to the identification of the social impact of research in the social sciences. An extensive interview programme with researchers in a Welsh university research centre was conducted to identify the productive interactions and the perceived social impacts. The paper argues that an understanding of and focus on the processes of interaction between researchers and stakeholders provides an effective way to study social impact and to deal with the attribution problem common to the evaluation of the social impact of research. The SIAMPI approach thereby differentiates itself from other forms of impact assessment and evaluation methods. This approach is particularly well-suited to the social sciences, where research is typically only one component of complex social and political processes.
There are a number of well-known problems confronting the practice of social impact assessment, including a lack of agreement on and availability of quantitative data, the diversity of audiences engaged in impact assessment, and the issues of temporality (the long time span between research and impact) and attribution (the degree to which research, as well as other inputs, informed a decision). Furthermore, many methods of evaluating the impact of research are based on linear models, whereas the literature demonstrates that most impact is produced in an iterative fashion. Both of these papers assume that impact is built on collaborations between researchers and research users: the more productive these collaborations (= interactions), the greater the opportunity to produce an impact that benefits society. The authors “understand productive interactions as exchanges between researchers and stakeholders in which knowledge is produced and valued that is both scientifically robust and socially relevant.” And, “The interaction is productive when it leads to efforts by stakeholders to somehow use or apply research results or practical information or experiences.” Productive interactions are important because “in order to have impact you have to have contact” (throughout the research process).
Spaapen identifies three kinds of productive interactions:

  1. Direct interactions: ‘personal’ interactions involving direct contacts between humans, interactions that revolve around face-to-face encounters, or through phone, email or videoconferencing.
  2. Indirect interactions: contacts that are established through some kind of material ‘carrier’, for example, texts, or artifacts such as exhibitions, models or films.
  3. Financial interactions: when potential stakeholders engage in an economic exchange with researchers, for example, a research contract, a financial contribution, or a contribution ‘in kind’ to a research programme.

Here’s one observation: productive interactions, at least for Spaapen, is a framework. It’s another freakin’ framework! See my rant on yet another framework in the PARiHS journal club entry. Spaapen admits he doesn’t have a tool but a framework that can be adapted to different situations. Spaapen points out that “there are the three types of ‘productive interactions’ (direct interactions, indirect interactions, and financial interactions). For each of these it is possible to gather data about field-specific indicators that give information about contribution and uptake.” But that’s where his advice ends. He doesn’t indicate what those indicators might be. He tells three stories that illustrate productive interactions, but he doesn’t appear to use the framework to develop a method for putting it into practice.
Molas-Gallart tells us he has gone one step further. Drawing on Spaapen’s paper, Molas-Gallart “used two structured questionnaires based on open questions: one for the researchers and another for stakeholders. The questionnaires were structured in three sections:

  1. The context of the research and its application environment;
  2. The productive interactions; and
  3. The outcomes and impacts associated with such productive interactions.”

He applies these two questionnaires across six different domains within a single ESRC research centre. But that’s as far as he goes. He doesn’t present the text of his questionnaires, and his data analysis comes across as qualitative case studies, just like Spaapen’s cases. Indeed, Molas-Gallart says his analysis “can be seen as one more in a long line of ‘story-telling’ assessments of research impact.” That’s fine, but we need more. We need quantitative metrics to complement the plethora of qualitative stories. Productive interactions might give us a framework for telling these stories according to the three kinds of productive interactions, but what does it really add to our collective quest for the holy grail of research impact? We have a new framework to tell stories, but at the end of the day we are still telling stories. Important, and possibly now more clearly organized, but still qualitative stories.
One thing both authors did was “track forward” from collaboration to outcome/impact. This appears to be a more productive method than “tracking backwards” from outcome/impact to the research. Tracking forward, however, requires the capacity to go back to your collaborators or partners from a number of years ago (see the comments above on temporality) and ask what has happened since as a result of that collaboration. Absent an institutional structure like a knowledge mobilization unit, most universities lack the resources to do this and provide little incentive for faculty to return to collaborations that have already produced their academic outputs and track forward to non-academic impacts.
Interesting factoid – Spaapen quotes “it appears that 75% of successful innovation depends on social innovation, such as new forms of organizing work and relations, and only 25% on R&D and new knowledge”. If this is correct, then there is a huge role for knowledge mobilization and related activities to play in innovation and in the economy, which depends on innovation and productivity.
Key points for brokers’ discussion:

  1. Can social media mediate productive interactions?
  2. Knowledge mobilization is a suite of services that supports collaborations between researchers and their non-academic research partners. Since good collaborations = productive interactions, and we rely on telling stories to illustrate the impact of knowledge mobilization on non-academic decisions (rant above notwithstanding), how well can this framework help knowledge brokers tell their story? What does this mean for all those knowledge translation/transfer web sites that make research accessible to decision makers but don’t anchor the transfer within a collaboration or productive interaction?
  3. One by-product of Molas-Gallart’s study was the research centre’s realization that an assessment of productive interactions actually encourages collaboration, since it focuses on interactions with stakeholders as a necessary precondition to impact rather than on the impact alone. If we relieve the pressure on researchers to demonstrate an impact (thank you, REF 2014), can we actually enhance research impact by focusing the attention of researchers on collaborations and productive interactions?
  4. Isn’t this all a bit circular? We know that research utilization is enhanced when researchers and decision makers collaborate throughout the research cycle (thank you, Sandra Nutley, who told us this in 2007). If we need collaborations to make impact, and impact is generated as a result of productive interactions, and productive interactions are really the same thing as collaborations… haven’t we been down this road before?

ResearchImpact-RéseauImpactRecherche (RIR) is producing this journal club series as a way to make evidence on knowledge mobilization more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Contact your local college or university to get a copy of these articles. Read these articles. Then come back to this post and join the journal club by posting your comments.