Assessment, Evaluations, and Definitions of Research Impact: A Review

Penfield, T., Baker, M.J., Scoble, R. and Wykes, M.C. (2014) Assessment, evaluations, and definitions of research impact: A review. Research Evaluation, 23(1), 21-32.


This article aims to explore what is understood by the term ‘research impact’ and to provide a comprehensive assimilation of available literature and information, drawing on global experiences to understand the potential for methods and frameworks of impact assessment being implemented for UK impact assessment. We take a more focused look at the impact component of the UK Research Excellence Framework taking place in 2014, some of the challenges to evaluating impact, the role that systems might play in the future for capturing the links between research and impact, and the requirements we have for these systems.

This article is one among many discussing systems of research impact assessment. It is a good overview of why we do impact assessment, discusses a couple of different models, points out some of the pitfalls, and calls for systems and tools to do this work. Looking back five years since its submission in 2013, how much progress have we made?

The paper is delightfully brief on definitions. They are important, but let’s not get stuck in thinking and get on with doing. One key distinction to appreciate is between academic outputs (data, articles, patents, books, performances, etc.) and the impact on end beneficiaries that results from someone using those outputs to inform products, policies and services.

The paper presents five challenges (but few answers) with impact assessment, most related to the timeframe in which impact occurs:

1. Time lag: impact takes time. If you’re setting up a system of impact assessment, make sure you leave enough time between research outputs and impact assessment (though the paper offers no advice on how long is long enough).

2. Developmental nature of impact: impact doesn’t just happen one day; it develops over time. If you left more time, would you see more impact?

3. Attribution: research evidence is only one input into observable change. See this blog post for more information.

4. Knowledge creep: your research evidence also isn’t static. It grows over time, so what you observe will differ depending on when you assess.

5. Gathering evidence: if we gather evidence retrospectively much will be lost because people move in and out of roles. Collecting evidence prospectively addresses this (but that requires systems – see below).

The paper then proceeds to speak about systems for capturing the evidence of impact.

The evidence of impact is resistant to quantitative indicators. They only tell part of the story. Whatever system you develop needs to “capture links between and evidence of the full pathway from research to impact, including knowledge exchange, outputs, outcomes, and interim impacts, to allow the route to impact to be traced.” This evidence is made up of indicators, narratives, surveys, testimonials and citations in grey and policy literature and is most effectively expressed in case study formats.

Even a system wide effort like the UK REF that retrospectively developed 6,679 impact case studies across all disciplines has its limitations. As we move to systems, “the transition to routine capture of impact data not only requires the development of tools and systems to help with implementation but also a cultural change to develop practices, currently undertaken by a few to be incorporated as standard behaviour among researchers and universities.” The paper also concludes that “the development of tools and systems for assisting with the impact evaluation would be very valuable”.

And yet in 2016, The Research Impact Handbook provided general advice on capturing the evidence of impact in chapter 21 but no tools, even though tools were presented for event planning, impact planning and stakeholder engagement.

Questions for brokers

1. Why are you doing your assessment? Do you want to advocate for change (i.e. more funding), be held accountable to your stakeholders, allocate funding and/or understand a system of impact so you can provide advice? These 4As are adapted from the reasons for impact assessment in the paper.

2. Read the post on attribution linked above and figure out if you care about attribution.

3. It’s 2018. We have the REF in the UK, the Netherlands Standard Evaluation Protocol, the Australian Engagement and Impact Assessment Pilot and the Performance-Based Research Fund in New Zealand. Why haven’t we developed any tools to help with this important but admittedly difficult undertaking?

Research Impact Canada is producing this journal club series as a way to make evidence on Knowledge Mobilization more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read this open access article. Then come back to this post and join the journal club by posting your comments.

Health Research, Development and Innovation in England from 1988 to 2013: From Research Production to Knowledge Mobilization

Walshe, K. & Davies, H. T. O. (2013). Health research, development and innovation in England from 1988 to 2013: From research production to knowledge mobilization. Journal of Health Services Research and Policy, 18(Suppl. 3), 1–12.


This paper presents a critical analysis of the development of government policy and practice on health research, development and innovation over the last 25 years – starting from the publication of a seminal report from the House of Lords Science and Technology Committee in 1988. We first set out to map and analyse the trends in ideas and thinking that have shaped research policy and practice over this period, and to put the development of health research, development and innovation in the wider context of health system reforms and changes. We argue that though this has been a transformative period for health research, rather less progress has been made in the domains of development and innovation, and we offer an analysis of why this might be the case. Drawing on advances in our understanding about how research informs practice, we then make the case for a more integrative model of research, development and innovation. This leads us to conclude that recent experiments with Collaborations for Leadership in Applied Health Research and Care and Academic Health Science Centres and Networks offer some important lessons for future policy directions.

It’s not a new article but I doubt much has changed in the last five years. This is not an article that will necessarily speak to your knowledge mobilization practice but it will speak to how you locate your practice in the broader system(s) of research.

This article reviewed 13 major reports on Britain’s health research system published between 1988 and 2011. The basic conclusion of the first half of the article is that the primary beneficiaries of health research funding have been health researchers themselves. Despite all the calls for research to improve human health, we have instead been feeding the academic research enterprise. “Overall, it can appear that the research enterprise itself is the purpose of health research, and that the long-standing and oft-stated ambition that research strategy should more closely serve the needs of the [National Health Service] has not yet been fully achieved.”

It is important to note that neither of the authors are health researchers. Both study the business of health from the perspective of faculty in business schools in Manchester and St. Andrews.

The article then looks at the role of innovation and impact – essentially getting a public return from the public investments in health research. As the authors state, this is about “how evidence from research is used, disseminated, understood, translated, mobilized or applied, and what effects are seen on how health care systems work and how health care is delivered to patients.”

They cite two experiments in combining research and health application: the Collaborations for Leadership in Applied Health Research and Care (CLAHRCs) and Academic Health Science Centres and Networks (AHSCs and AHSNs), and point out how these offer some important lessons for future policy directions. In both experiments, “for the first time, the primary mission was presented as knowledge mobilization, rather than research production”. The authors “would point to the development of organizational capacity in knowledge mobilization; the creation of local and organizational research priorities and agendas; the explicit linking of research to knowledge mobilization; and the requirement for organizational co-investment in, and commitment to, research and knowledge mobilization.”

This is presented in an evolving health care environment where, increasingly, the frail elderly present with multiple conditions, primarily chronic in nature, are heavy users of the health system and also require attention from social care systems. All of this occurs in an age of austerity that demands public returns on public investments.

These problems are mostly concerned with service issues like pathway and process redesign, safety and quality; organizational issues like coordination, integration and networking; workforce issues like training and skill mix; and patient issues like experience, education and empowerment. Yet the research enterprise remains largely focused on life sciences and bio-medicine and on the development and evaluation of technologies like drugs, diagnostic tests and devices.

There is a disconnect between the research agenda (directed by researchers wanting to make technologies) and the health agenda (ideally directed by patient need wanting reduced wait times and linking of health and social care services).

Questions for brokers:

1. This is about health research. While there may not be the same number of government reviews of the broader research system, do you think the conclusions will apply to natural sciences, engineering, social sciences and humanities research?

2. Read up on CLAHRCs, as there have been several studies about them. They are a useful construct that doesn’t exist explicitly in Canada but from which many of us in the research to impact space (in any discipline) can learn.

3. Is your research to impact practice located in a system driven by researchers or by end beneficiaries? Does this make a difference to achieving your impact goals?

Research Impact Canada is producing this journal club series as a way to make evidence on KMb more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read this open access article. Then come back to this post and join the journal club by posting your comments.

Social Impact of Participatory Health Research: Collaborative Non-Linear Processes of Knowledge Mobilization

Abma, T.A., Cook, T., Rämgård, M., Kleba, E., Harris, J. & Wallerstein, N. (2017) Social impact of participatory health research: Collaborative non-linear processes of knowledge mobilization. Educational Action Research, 25(4), 489-505.


Social impact, defined as an effect on society, culture, quality of life, community services, or public policy beyond academia, is widely considered as a relevant requirement for scientific research, especially in the field of health care. Traditionally, in health research, the process of knowledge transfer is rather linear and one-sided and has not recognized and integrated the expertise of practitioners and those who use services. This can lead to discrimination or disqualification of knowledge and epistemic injustice. Epistemic injustice is a situation wherein certain kinds of knowers and knowledge are not taken seriously into account to define a situation. The purpose of our article is to explore how health researchers can achieve social impact for a wide audience, involving them in a non-linear process of joint learning on urgent problems recognized by the various stakeholders in public health. In participatory health research impact is not preordained by one group of stakeholders, but the result of a process of reflection and dialog with multiple stakeholders on what counts as valuable outcomes. This knowledge mobilization and winding pathway embarked upon during such research have the potential for impact along the way as opposed to the expectation that impact will occur merely at the end of a research project. We will discuss and illustrate the merits of taking a negotiated, discursive and flexible pathway in the area of community-based health promotion.

Use this if you ever need a reference about the importance of engaged scholarship (and the *very* many acronyms presented in this paper) over knowledge translation. This article complements a previous journal club entry from Bowen and Graham. The basic tenet (which isn’t news to any of us) is that iterative models of engaged scholarship are more powerful than linear models of knowledge translation for moving beyond health research to outcomes (new health practices and policies) that will have a social impact. The authors prefer the iterative concept of knowledge mobilization to the linear concept of knowledge translation (personal commentary: SSHRC is grinning, CIHR not so much).

Traditional health (and other) research creates epistemic injustice. That’s an interesting new term. “Epistemic injustice arises when a person is not seen as credible as compared to providers, and this power differential is exacerbated when a patient doesn’t understand the language or comes from marginalized circumstances. This power inequity arises equally in health research when outsiders decide the research agenda and direct the process.”

Traditional scholarly production works in a reductionist manner, analyzing and describing complex phenomena by focusing on simple or fundamental constituents divorced from the complexity of contexts. Traditional research stands apart from and observes context. Engaged scholarship is embedded and engaged with the practices it is studying. It aims for joint understanding (not just the understanding of the researcher) and seeks to develop plans for action (i.e. observing and concluding without identifying action is not sufficient). And I would add that the most common of academic action items, “more research is needed”, is not a satisfactory action plan for non-academic participants.

If traditional research paradigms foster epistemic injustice, then it follows that engaged scholarship fosters epistemic justice and can lead not just to “communicating research findings and delivering a ‘product’ or ‘service’ for society, it involves fostering social change within the wider complex social system in which the research is taking place”. Knowledge translation is predicated on disseminating knowledge. Knowledge mobilization and engaged scholarship seek to support broader social impact.

There is an amusing moment in Figure 1, which is a conceptual model (yes, yet another conceptual model) of participatory research. It creates a framework which starts with 1) consideration of the context of research, moves to 2) partnership processes, which precede the 3) research and intervention function, which creates 4) intermediate outcomes and long-term impacts, all connected by unidirectional arrows. The authors have created a linear conceptual framework after taking 10 pages to illustrate how linear models of knowledge translation are insufficient.

However, as I point out in our own paper on the co-produced pathway to impact, linear models are okay but only when they are describing a system of research. No individual engaged research project is linear but a system of engaged research projects seeks to progress from stakeholder engagement to impact, otherwise what’s the point of doing research?

Questions for brokers:

1. Traditional scholarship is reductionist. Engaged scholarship is embedded in multi-sectoral context and multidisciplinary complexity. Do we (and if so, how do we) maintain both traditional and engaged scholarship as complementary approaches?

2. Epistemic injustice. How is your practice working to counter epistemic injustice?

3. Linear models of impact are okay. Discuss.

Research Impact Canada is producing this journal club series as a way to make evidence on KMb more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read this open access article. Then come back to this post and join the journal club by posting your comments.

Research Impact: A Narrative Review

Greenhalgh, T., Raftery, J., Hanney, S., & Glover, M. (2016). Research impact: A narrative review. BMC Medicine, 14(78), 1-16.


Impact occurs when research generates benefits (health, economic, cultural) in addition to building the academic knowledge base. Its mechanisms are complex and reflect the multiple ways in which knowledge is generated and utilised. Much progress has been made in measuring both the outcomes of research and the processes and activities through which these are achieved, though the measurement of impact is not without its critics. We review the strengths and limitations of six established approaches (Payback, Research Impact Framework, Canadian Academy of Health Sciences, monetisation, societal impact assessment, UK Research Excellence Framework) plus recently developed and largely untested ones (including metrics and electronic databases). We conclude that (1) different approaches to impact assessment are appropriate in different circumstances; (2) the most robust and sophisticated approaches are labour-intensive and not always feasible or affordable; (3) whilst most metrics tend to capture direct and proximate impacts, more indirect and diffuse elements of the research-impact link can and should be measured; and (4) research on research impact is a rapidly developing field with new methodologies on the horizon.

This article is an important review of some of the dominant frameworks for research impact assessment (RIA) and points the way to some new arrivals on the scene. Anything by Trish Greenhalgh is good so this was an easy pick for this month’s journal club.

The frameworks reviewed are Payback, the Research Impact Framework, the Canadian Academy of Health Sciences (CAHS) framework, monetisation, societal impact assessment and the UK Research Excellence Framework. Context is interesting here, as I had never heard of monetisation or societal impact assessment, and this review lacks the Knowledge to Action Cycle, which is dominant in Canada.

Bottom line: if you want to do it well, it really doesn’t matter what framework you choose. It will be time consuming, require dedicated skills and, therefore, be expensive.

If you don’t want to dive into the deep end of the RIA pool, the Research Impact Framework was developed by and for academics interested in measuring themselves. As such, it is a “light touch” checklist intended for use by individual researchers who seek to identify and select impacts from their work “without requiring specialist skill in the field of research impact assessment”. Apart from this one, all the others were labour intensive.

A couple of things that stand out that aren’t in the abstract (which is a great summary of the take home information):

• The SPIRIT Action Framework (a newer model) employs a logic model (as most do), but unusually, the ‘logic model’ focuses not on the research but on the receiving organisation’s need for research. I find this compelling because impact is a function of our industry partners making products, our government partners developing policies and our community partners delivering social services. It is true that clinical research or education research in the classroom can make an impact on immediate patients or students, but the opportunity for this research is to scale beyond the single clinic or classroom. To assess the impact of research you *must* engage those who are using it as well as those producing it. This is the basis of the co-produced pathway to impact, which would be interesting to compare against these models (as I have already done).

• Contribution mapping is a new approach in RIA. “In this model, the focus is shifted from attribution to contribution – that is, on the activities and alignment efforts of different actors (linked to the research and, more distantly, unlinked to it) in the three phases of the research process (formulation, production and extension)”. Contribution mapping aligns with a performative paradigm (column 6 in Table 1 in the article). For more on contribution analysis, see a previous journal club post.

Questions for brokers:

1. What framework is the basis for your research impact assessment approach? Does it focus on the producers of research or the users of research?

2. Are you a researcher (or a research administrator) using Researchfish to capture the evidence of impact? Why are you capturing the evidence of impact from a researcher during or at the end of a research study when the researcher isn’t the one making the impact (see above) and the impact hasn’t usually occurred by the end of the study?

3. Are you an experienced research impact assessor? If not, who are you collaborating with that has these skills? Where can you go to build your skill set in research impact assessment?

Research Impact Canada is producing this journal club series to make evidence on knowledge mobilization more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read this open access article. Then come back to this post and join the journal club by posting your comments.

The Societal and Economic Impacts of Academic Research: International perspectives on good practice and managing evidence

Digital Science (2016). The Societal and Economic Impacts of Academic Research: International perspectives on good practice and managing evidence. Report available from


This report was created to support a workshop in London in March 2016, supported by the Higher Education Funding Council for England (HEFCE). The objectives are to encourage researchers across all disciplines to reflect on what socio-economic research impact means for their areas of interest and what types of evidence best reflect achievement.

When impact case studies were added to the UK’s 2014 Research Excellence Framework they created a new way of looking at what research delivers. This has proven remarkably amenable and incredibly revealing. There is a flavour to research outcomes that analytical indicators can never provide. But this was the first time such an exercise had been tried across all subjects in all universities in one cycle. One very flexible template fitted all. Now, with this experience, we have an opportunity to reflect on what worked and what could be improved.

Disciplinary communities must reflect on what they believe culturally constitutes proper, acceptable and appropriate evidence of economic, social or other impact and what constitutes strong or weak levels of achievement. It seems unlikely that broad-based sciences and arts will conceptualise impact, evidence of impact and assessment of impact in the same ways. There may also be divergence between professionally-focussed areas, like social policy, and their background academic disciplines, like sociology. And, whereas citation impact is used in the same way across continents, does the cultural construction of research impact allow it to become a global comparator?

This report is a reflection on the UK Research Excellence Framework (REF) and introduces some international perspectives on impact. It provides several useful conclusions about impact assessment and systems of impact assessment. It lacks a critical or empirical research lens, but the people and organizations providing the commentary are recognized leaders in their fields. We can learn much from these leaders as Canada and other countries start to think about impact assessment.

Impact assessment is hard and expensive but worthwhile. The impact case study portion of the REF cost £55M but drove the allocation of over £1.6B in investment in higher education. This is a 3.4% transaction cost, which is much less than the 13% transaction cost for grant writing.

There were four types of evidence used to substantiate claims of impact: stakeholder information, testimonials, online traffic and positions of responsibility. Figure 1 breaks these down into more granular forms of evidence (i.e. letters of support and focus groups as two of seven types of testimonial evidence). Testimonials were the most frequently used evidence type for the Physical Sciences & Engineering, Social Sciences, and Arts and Humanities assessment panels, with the Biological Sciences & Medicine panel using reports more frequently.

The report advocates for evidence collection during the conduct of the research, not just retrospectively. However, this requires impact to be planned (ex ante) rather than waiting until after the project has finished (ex post). It also requires some system to collect, gather and store the different forms of evidence of impact. Commercial products such as Researchfish and VV Impact Tracker can help with this; a good electronic filing system and Excel will likely work for a small-scale impact assessment.

Related to this, the report states “Experience enabling clients to report research impact confirms that planning for impact is best done at the beginning of the research process, putting in place impact data and evidence capture as the project is being conceptualised.” See a Mobilize This! blog post about how this ex ante (planned at the beginning) impact assessment is the same thing as knowledge mobilization planning. When you plan your knowledge mobilization (i.e. impact) strategy you need to develop a theory of change (i.e. impact pathway) which will inform the indicators you need to measure along the way and at the end of the process. Knowledge mobilization planning is ex ante impact assessment.

Australia has launched its research engagement and impact pilot. While it falls short of saying engagement metrics are a proxy for impact, the theory is that engagement is a necessary precursor to impact, something I wrote about more than two years ago. Engagement should not be assessed separately from impact. Engagement metrics can be measured along the pathway from research to impact, but they should not be conflated with actual impact.

Questions for Brokers:

1. Discuss: You can’t have impact without engagement but you can have engagement without impact.

2. Commercial systems such as Researchfish and VV Impact Tracker are used (usually by academic institutions) to capture the evidence of impact from (usually) researchers. But the forms of evidence of impact (Figure 1) are derived from research users/partners. Why do we ask researchers to gather the evidence of impact when they aren’t the ones making the impact?

3. Impact assessment is hard but worthwhile. In Canada, we don’t (yet) have a REF like system that requires impact assessment. Why would we bother doing this if we don’t have to?

Research Impact Canada is producing this journal club series to make evidence on KMb more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read this open access article. Then come back to this post and join the journal club by posting your comments.

Towards a Theory of Change for Community-based Research Projects

Janzen, R., Ochocka, J. & Stobbe, A. (2015). Towards a theory of change for community-based research projects. Engaged Scholar Journal, 2(2), 44-64.


The purpose of this article is to present a preliminary theory of change for community-based research projects. The theory of change emerged from a Canadian Summit titled, “Pursuing Excellence in Collaborative Community-Campus Research.” The article begins by providing a rationale for why a theory of change could be helpful to advance the agenda of community-based research (i.e., concept clarification, guide to action, and quality assessment). Next we describe how our preliminary theory of change was developed, before outlining the theory of change under the headings of activities, intended outcomes, and sample indicators. We conclude by discussing what is needed in order to deepen our understanding of the theory of change for community-based research projects.

This article describes a theory of change for community-based research. This is relevant to broader knowledge mobilization since both KMb and CBR involve collaborations (i.e. co-produced research) between academic and non-academic partners, are responsive to real world challenges, and are focused on change making.

A theory of change describes the anticipated processes in a chain of intended outcomes. It describes the steps from inputs to activities to outcomes and impacts. In knowledge mobilization terms, a theory of change is equivalent to an impact pathway. We have recently described a co-produced pathway to impact (CPPI) that is a theory of change for research impact. A theory of change predicts outcomes and is used prospectively. It is a required element of many impact-oriented grant applications. If you can’t describe what you think will happen, then how can you plan your activities, and why should someone else fund it?

The four components of the theory of change are: 1) laying the foundation; 2) planning the research; 3) gathering information and analysing it; and, 4) acting on the findings. Typical of all theories of change, impact pathways and frameworks, these four components are not presented in a linear fashion but in a cycle with arrows looping forward and backward (but check the CPPI in the link above for a discussion of why linear is okay when working in systems).

The paper has a number of lists: three hallmarks of CBR, three main functions of CBR, four components in the theory of change. I am not certain how these lists link together or if they are presented independently. Nonetheless, what is very helpful is the table of indicators the authors present to help collect evidence for research process (relevance to communities, meaningful participation); research rigour (meaningful and useful data and interpretation); and research impact (mobilization of knowledge, mobilization of people).

But here’s the thing about any pathway to impact or theory of change: the best a pathway can provide is generic advice. A pathway (or framework) must be implemented in a specific fashion for each project. It’s not about research but what research (what data sets, what methods). It’s not about a partner but what partner (public, private or non-profit organization, local or multinational). It’s not just about impact but what impact (what indicators, collected from whom, collected when). As the authors state “The challenge is to develop a theory of change that is flexible enough to adapt to each unique research project, while also providing the implementation commonalities to aid with research planning and management across projects”. Research on the KTA Cycle showed that most people just cite the pathway as their framework. They don’t implement the pathway in a specific fashion.

Questions for brokers:

1. Is there a difference between a framework (like the KTA Cycle, the Canadian Academy of Health Sciences impact assessment framework) and a theory of change (like this CBR example, the CPPI)?

2. What is your theory of change? How have you specifically implemented that pathway to fit your project?

3. What are your indicators of impact? When are you collecting them from whom? Check Table 1 for categories of indicators – and don’t forget to adapt them to your specific context.

Research Impact Canada is producing this journal club series as a way to make evidence on KMb more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read this open access article. Then come back to this post and join the journal club by posting your comments.

It Takes Two to Tango: Knowledge Mobilization and Ignorance Mobilization in Science Research and Innovation

Gaudet, J. (2013). It takes two to tango: Knowledge mobilization and ignorance mobilization in science research and innovation. Prometheus, 31(3), 169-187.


The main goal of this paper is to propose a dynamic mapping for knowledge and ignorance mobilization in science research and innovation. An underlying argument is that ‘knowledge mobilization’ science policy agendas in countries such as Canada and the United Kingdom fail to capture a critical element of science and innovation: ignorance mobilization. The latter draws attention to dynamics upstream of knowledge in science research and innovation. Although perhaps less visible, there is ample evidence that researchers value, actively produce, and thereby mobilize ignorance. For example, scientists and policymakers routinely mobilize knowledge gaps (cf. ignorance) in the process of establishing and securing research funding to argue the relevance of a scientific paper or a presentation, and to launch new research projects. Ignorance here is non-pejorative and by and large points to the borders and the limits of scientific knowing – what is known to be unknown. In addition, processes leading to the intentional or unintentional consideration or bracketing out of what is known to be unknown are intertwined with, yet remain distinct from, knowledge mobilization dynamics. The concepts of knowledge mobilization and of ignorance mobilization, respectively, are understood to be the use of knowledge or ignorance towards the achievement of goals. The value of this paper lies in its conceptualization of the mobilization of knowledge as related to the mobilization of ignorance within a complex, dynamic and symbiotic relationship in science research and innovation: it takes two to tango.

Joanne Gaudet (@gaudetj_99) guest blogged about ignorance mobilization on Mobilize This! in January 2013. Her paper was published later that year and I thought it would be interesting for the journal club to step back from our practices and think about the words we use to describe our work.

Ignorance mobilization needs to be defined, but before we define ignorance mobilization we need to define ignorance. Ignorance is meant not in a pejorative sense but as "the limits and borders of knowing – what scientists know that remains unknown in any given area of science." Ignorance is the lack of knowledge, and for scientists that means the questions that arise from the discovery of new knowledge. As we discover new knowledge we raise questions about what remains unknown. That is ignorance, and it can be mobilized. When a researcher publishes a scientific paper s/he almost always writes about what questions (whose answers are unknown) arise from the project, or what the next steps will be to learn what is unknown.

In a reference to the title of the article, Joanne explains how knowledge and ignorance are interconnected. “Knowledge remains provisory in relation to ignorance (and vice versa) in a complex, dynamic and symbiotic relationship: it takes two to tango.”

Furthermore, ignorance drives innovation. Research seeks to know what is unknown and develops new approaches to products, policies and services that are only improved by mobilizing ignorance to develop new knowledge.

Are you with me?

Knowledge mobilization is mobilizing what is known. Ignorance mobilization is mobilizing what is unknown.

If you are mobilizing knowledge you are disseminating what is already known, making knowledge accessible in alternative formats to enhance its uptake by audiences wider than those reached by traditional scientific publication. CIHR calls that end-of-grant KT. I call it dissemination.

If you are mobilizing ignorance you are working to develop knowledge about what is unknown. Anyone practicing integrated KT (i.e. collaborations between academic researchers and non-academic partners) is actually practicing ignorance mobilization. Any collaboration starts with a shared interest in a question about something that is unknown. In other words a collaboration starts with a shared interest in ignorance. Collaborating on ignorance is ignorance mobilization.

We disseminate knowledge but we collaborate on ignorance.

Are you with me now?

Ignorance drives innovation (see above) and is also used to inform policy. Joanne claims that “when policymakers pay attention to expert opinion on knowledge gaps, they are mobilizing ignorance – using active non-knowledge in attempts to reach political goals.” Helping end users achieve a goal is the raison d’être of knowledge and ignorance mobilization.

I find this paper interesting but I do not see how it helps my knowledge mobilization practice except to give me a new perspective on what I am doing. The models she presents of interactive knowledge and ignorance mobilization come from her thinking in 2011 or 2012 (the paper was published in 2013). Our understanding of pathways from research to impact, and of the role of knowledge brokers in both dissemination and integrated methods of knowledge/ignorance mobilization, has become more sophisticated since then. I don't recommend getting lost in her models. But I do encourage all of us to take a step out of the weeds of our daily work and consider other perspectives.

Questions for brokers:

1. What percentage of your effort is spent mobilizing knowledge (dissemination or end of grant KT) vs supporting collaborations on ignorance (integrated KT)?

2. Ignorance mobilization: An interesting theoretical concept or a distraction from getting my job done. Discuss.

3. How much time do you wish you had to think about your work to better inform how you do it?

Research Impact Canada is producing this journal club series as a way to make the evidence and research on KMb more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read the article, then come back to this post and join the journal club by posting your comments.

The “Dark Side” of Knowledge Brokering

Kislov, R., Wilson, P. & Boaden, R. (2016). The “dark side” of knowledge brokering. Journal of Health Services Research & Policy, 22(2), 107–112.


Deploying knowledge brokers to bridge the ‘gap’ between researchers and practitioners continues to be seen as an unquestionable enabler of evidence-based practice and is often endorsed uncritically. We explore the ‘dark side’ of knowledge brokering, reflecting on its inherent challenges which we categorize as: (1) tensions between different aspects of brokering; (2) tensions between different types and sources of knowledge; and (3) tensions resulting from the ‘in-between’ position of brokers. As a result of these tensions, individual brokers may struggle to maintain their fragile and ambiguous intermediary position, and some of the knowledge may be lost in the ‘in-between world’, whereby research evidence is transferred to research users without being mobilized in their day-to-day practice. To be effective, brokering requires an amalgamation of several types of knowledge and a multidimensional skill set that needs to be sustained over time. If we want to maximize the impact of research on policy and practice, we should move from deploying individual ‘brokers’ to embracing the collective process of ‘brokering’ supported at the organizational and policy levels.

This is not a research article but one grounded in the literature and the authors' experiences as knowledge mobilization practitioners. That makes it potentially more interesting, since it is based on lived experience rather than detached observation.

The authors critically assess the usual position of the “knowledge broker as an unquestionable enabler of evidence-based medicine, enhancing the flow of knowledge between researchers and practitioners” by examining the “growing evidence about the unintended consequences of deploying knowledge brokers in health care which are often overlooked”. This critical lens is an important juxtaposition to the many articles that report positive outcomes of knowledge brokering.

This journal club has in the past presented articles that take a critical look at brokering, including the challenging career paths of knowledge brokers. In addition, Dobbins et al. published on the lack of evidence for effective knowledge brokering in a randomized controlled trial. The authors of the current article point out a number of challenges brought about by the intermediary nature of knowledge brokers. These include:

• The tendency of knowledge brokers to focus on information management techniques as opposed to linkage/exchange and capacity building since the latter are more time intensive and the former are more easily standardized and evaluated in practice

• The focus on explicitly codified knowledge vs. tacit knowledge

• The temptation to slip from facilitating knowledge uptake to doing knowledge uptake on behalf of end users especially if the broker is a subject matter expert

• The ambiguity and instability of the “in between-ness” of knowledge brokers

• Lack of role clarity, guidance and career path

The authors propose a shift from individual knowledge brokers to knowledge brokering teams. “We call for a major shift from this perspective towards embracing knowledge brokering as an inherently collective process unfolding at the team level and actively supported by the broader organization. If we want to maximize the impact of research on policy and practice, we should move from deploying ‘brokers’ to embracing ‘brokering’.”

The authors describe what needs to happen to accomplish this:

“The first step in this direction is to foster brokering teams composed of people with different professional backgrounds and having complementary skills. These skills should combine those needed for successful information management, linkage and exchange, and capacity building with broader clinical, managerial and contextual knowledge. This may be achieved by the involvement of academics, clinicians, managers, information scientists and service users.”

“Second, organizations deploying knowledge brokers should recognize brokering as part of their ‘core’ business, providing a range of learning, development and promotion opportunities to staff occupying the ‘in-between’ roles. Supporting the knowledge brokers’ communities of practice and creating regional or national forums for staff occupying intermediary roles can help alleviate their sense of isolation and enable peer-to-peer learning.”

Questions for brokers:

1. Are you a lone broker in your organization/project? Do you experience this dark side?

2. The authors call for communities of practice and regional/national forums. Have you signed up (for free) for the KTE CoP (mainly Canadian) or the Knowledge Brokers Forum (international) or the Knowledge into Practice Learning Network (international)? Do you attend the Canadian Knowledge Mobilization Forum?

3. The authors state, “organizations deploying knowledge brokers should recognize brokering as part of their ‘core’ business.” Is knowledge mobilization a priority in your organization’s strategic plan?

Research Impact Canada is producing this journal club series as a way to make the evidence and research on KMb more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read the article, then come back to this post and join the journal club by posting your comments.

Faculties of education and institutional strategies for knowledge mobilization: An exploratory study

Sá, C.M., Li, S.X. & Faubert, B. (2011). Faculties of education and institutional strategies for knowledge mobilization: An exploratory study. Higher Education 61(5), 501-512. doi:10.1007/s10734-010-9344-4.

You can request a copy of the article from ResearchGate here


The goal to enhance the impacts of academic research in the ‘real world’ resonates with progressive visions of the role of universities in society, and finds support among policy makers who have sought to enhance the ‘transfer’, ‘translation’, ‘uptake’, or ‘valorization’ of research knowledge in several areas of public services. This paper reports on an exploratory study of the strategies used by selected Canadian and international faculties of education to mobilize research knowledge. Drawing on data from semistructured interviews with senior administrators of 13 faculties of education, the analysis reveals several themes. Academic leaders recognize knowledge mobilization as a desirable institutional mission, but they identify a number of barriers to greater efforts in this area. Although a number of strategies are employed, changes across multiple organizational dimensions to encourage and support knowledge mobilization were reported at only two institutions. These results are relevant to faculty administrators, scholars, and policymakers interested in understanding the role of academic institutions in the mobilization of research knowledge to the broader education community.

This is another journal club about an article on institutional roles in knowledge mobilization. You can see recent others here and here. I am writing about these recently because I think we have lots of literature about the functions of researchers, partners and knowledge brokers but less on what institutions can do to support the work of the people who actually (co-)create the evidence and translate that into impact.

Should academic institutions play a role in knowledge mobilization? The authors acknowledge the dissenting opinions. “The notion that universities should take deliberate steps in this area is also subject to criticism. Some construe these efforts as an encroachment of utilitarian and instrumental views of the role of universities in society; others believe that emphasizing the external impact and uptake of research threatens forms of inquiry that do not lend themselves to immediate applications.” And then they move on. Nicely done.

If we accept that knowledge mobilization and related activities are functions of every SSHRC, CIHR, CRC and CFI (see Benefits to Canada) and health charity grant application, and of most NSERC grant applications, then I believe it is incumbent on institutions to support their faculty and students in these endeavours.

The authors quote literature identifying potential roles of institutions. Knowledge mobilization can be facilitated in academic institutions through changes in five areas: revising promotion and tenure guidelines to encourage and reward knowledge mobilization; providing funding and organizational resources such as opportunities for networking, skills training and administrative support; developing facilitating internal structures, such as dedicated offices; enhancing organizational orientation towards knowledge mobilization; and standardizing knowledge mobilization practices within the institution.

Five areas…how is your institution doing?

One area that didn’t come up in this research is the role of institutional planning. If knowledge mobilization and impacts of research are not in the university academic plan or strategic research plan (or equivalent in your institution), then knowledge mobilization will only ever be a marginal activity on soft money. Planning drives resources, which then enable activities like the five areas above. This is mentioned obliquely under “institutional priority and supports”, but the discussion goes on to describe supports rather than the role of planning in identifying institutional priorities.

There were five Canadian Faculties of Education among the 13 interviewed. Only two (Melbourne and London, UK) had structures to support knowledge mobilization.

Sorry, but this is wrong, because the methodology focused solely on the Faculties of Education. York University (a participating university) has a centralized knowledge mobilization unit under the VP Research & Innovation that provides services across the campus, including to the Faculty of Education. Asking the Faculties to speak about their broader institutional contexts would have picked this up.

Table 2 identifies the barriers to Faculty-level supports for knowledge mobilization, including money, time, divided attitudes, difficulty assessing outcomes and setting targets, and “because it is hard”, among others. Wait…we don’t do it because it is hard…give me a break. That’s a justification for needing professional supports, not a justification to avoid it when every SSHRC grant (which funds education research) requires it.

Rant over. Journal club almost over.

Questions for brokers:

1- The literature identifies five areas where institutions can support knowledge mobilization. How is your institution doing?

2- Is knowledge mobilization part of your institution’s planning? If not, how might you move this into institutional planning?

3- Institutional supports for knowledge mobilization: Faculty-based or a central service unit. Which is better?

Research Impact Canada is producing this journal club series as a way to make the evidence and research on KMb more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read the article, then come back to this post and join the journal club by posting your comments.

Institutionalising Evidence-Based Policy: International Insights into Knowledge Brokerage

Lenihan, A. T. (2015). Institutionalising evidence-based policy: International insights into knowledge brokerage. Contemporary Social Science. 10(2), 114-125.


Numerous organisations act as ‘evidence brokers’, providing and translating research for use by decision-makers. The relationship between the supply and demand for evidence is far from linear, and whether these organisations are self-professed evidence brokers or government appointed bodies, they face similar challenges in their quest to impact policy. This paper analyses the strategies of two organisations considered ‘exemplars’ of institutional knowledge brokerage: the CPB Netherlands Bureau for Economic Policy Analysis and the Washington State Institute for Public Policy. The author posits that three primary factors help these organisations connect evidence successfully to policy-makers: the institution’s credibility, based on independence, neutrality, reputation, trust, transparency and the quality of its methods and evidence; the utility of its research, based on transferability, timing, stakeholder involvement and resonance with policy-makers; and the communication of that research, in terms of effectiveness, dissemination, presentation and translation for policy-makers. Findings, and the possibility of applying these insights internationally, are then discussed and contextualised.

This article isn’t open access, sorry. If you want a copy you can request one from Dr. Lenihan directly at

This is another article that discusses the role of organizations as knowledge brokers. I recently reviewed another article on this topic. As opposed to that previous article which was about networks as knowledge brokers, this article is about discrete organizations working as knowledge brokers to inform policy decisions. It is good to see literature about the role of knowledge brokering organizations to complement the literature on human knowledge brokers.

Critique #1: the author references the status quo as the “two communities” approach from Caplan, 1979. We have long abandoned such reductionist views. Policy and research might sit in different organizational constructs but researchers and policy makers share spaces, collaborate and exchange.

Studying two different knowledge brokering organizations, the author concludes that three organizational factors define successful knowledge brokering organizations:

1. The institution’s credibility
2. The utility of its research
3. The communication of that research

Makes sense, no argument here. What I find interesting is to compare that to the three elements of the PARIHS framework that speak to the factors of evidence that enable evidence use:

1. The credibility of the evidence itself
2. The context of its use
3. The facilitation dedicated to supporting uptake of evidence

It seems there are elements of our work that transcend different contexts: the factors that support evidence use are similar to the factors that support institutional knowledge brokering. I recently blogged on an article I published about context being important, but not for the reasons you think.

Critique #2: “Effective communication of research to policy makers can be paramount to its utilisation”. Not wrong but certainly incomplete. Metrics of communications (i.e. social media analytics, downloads, media impressions) are at best proxies of use. You can have use without communication but you can communicate the heck out of your evidence and never get it used. We know from the evidence on evidence use that integrated (i.e. collaborative) forms of knowledge brokering are more effective than dissemination (i.e. communication) forms. Don’t just communicate evidence but actively facilitate its uptake by doing engaged workshops with decision makers (for example).

Here’s a fun fact: both organizations studied were inside government. They appear to be working as independent (“We are specifically non-partisan. We do what we are assigned and we don’t make recommendations”) research organizations (i.e. think tanks) generating evidence for policy decision making. This doesn’t affect the utility of the evidence or how it is communicated but I feel it draws into question the claims to credibility of the organization. If it is by government for government then government has to claim it is credible. If it isn’t credible then government will shut it down.

Questions for brokers:

1. Why do you think that certain enabling qualities can apply to evidence and organizations? What underpins this commonality?

2. Both organizations are independent evidence brokering organizations. If they are inside government how independent can they be (don’t forget to reference Stephen Harper in your discussion)?

3. How might these three factors underpinning successful knowledge brokering organizations map onto universities seeking to also provide evidence to policy makers? How does a university ensure credibility, utility and communication of evidence when it isn’t the university, but the researcher, who is generating the evidence?

RIR is producing this journal club series as a way to make the evidence and research on KMb more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read the article, then come back to this post and join the journal club by posting your comments.

The Impact of Higher Education Leadership, Management and Governance Research

Morrow, S. (2016). The Impact of Higher Education Leadership, Management and Governance Research: Mining the 2014 Research Excellence Framework Impact Case Studies. Report from Leadership Foundation for Higher Education.

Elizabeth Morrow is an independent researcher who produced this report for the Leadership Foundation for Higher Education. It is an exhaustive review (121 pages) of 1,309 Research Excellence Framework (REF) impact case studies from leadership, governance and management (LGM) research. This report is important not for LGM research (well it is…but that’s not why I find it important) but because it is a different look at the REF. The REF is the only system-wide research impact assessment process in the world (every UK university participated), so we can learn a lot about the opportunities and challenges of research impact assessment by studying it.

Why should knowledge mobilizers care about the REF? We care because impact is what we want to achieve and knowledge mobilization is one way to maximize impact. When evaluating our work impact is the dependent variable and knowledge mobilization is the independent variable (see more on this here). So we should care about the REF if we want to measure the effects of our knowledge mobilization efforts.

This is such a big report it is impossible for me to pull out all the good and interesting sections so I will use the limited space of this journal club to present the Adaptive Systems Framework for Advancing Research (AS-FAR) model. AS-FAR appears on page 86 of the report and is summarized on this HEFCE blog post. Click that link now and take a look at the model.

I wrote to Elizabeth Morrow. I thanked her and observed that the report, and hence AS-FAR, is derived from retrospective cases, and that I struggled to apply AS-FAR in a prospective manner despite the report claiming this was possible. My world is mostly about helping researchers and partners collaborate to co-produce research that has both academic merit and is useful for the partners who create the impact. I struggled to work through AS-FAR in this prospective, co-production fashion. I found AS-FAR more easily understood retrospectively: collecting evidence and creating the story of impact using transfer and exchange mechanisms.

I frequently critique academic frameworks since they describe a gold standard process or model yet I work in a world that never hits a gold standard. I find many frameworks describe what should be and I work in messier realities of what is. I struggle with thinking how I would implement AS-FAR sitting across from a research partnership helping them develop their impact strategy for their grant application.

Elizabeth responded acknowledging the complexity and indicated her current project is developing a Research Leader’s Impact Toolkit, focusing more on what research leaders can do to support researchers and future impact. We need this toolkit because AS-FAR as a model is challenging. We need tools that cut through complexity, working with it without letting it become a barrier. As practitioners we accept imperfection and work with it to get to impact. I am not certain AS-FAR helps, but the tools Elizabeth is developing might.

Stay tuned!

Questions for brokers:

1. You’re probably deep in the trenches helping to create impact. How do you assess it? Do you have a particular framework that guides you (and don’t say the Knowledge To Action Framework, for all the reasons here)?

2. Would REF work in Canada? We better figure this out because my bet is Canadian universities will be held accountable for impact at some point.

3. Do we have enough frameworks? Don’t we need more practical guidance on their use (and some tools would be nice)?

ResearchImpact-RéseauImpactRecherche is producing this journal club series as a way to make the evidence and research on knowledge mobilization more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read the article. Then come back to this post and join the journal club by posting your comments.

Knowledge Mobilization/Transfer and Immigration Policy: Forging Space for NGOs – the Case of CERIS – The Ontario Metropolis Centre

Shields, J., Preston, V., Richmond, T., Sorano, Y., Gasse-Gates, E., Douglas, D., Campey, J. and Johnston, L. (2015) Knowledge mobilization/transfer and immigration policy: Forging space for NGOs – the case of CERIS – The Ontario Metropolis Centre. Journal of International Migration & Integration, 16(2), 265–278.


The role of evidence-based knowledge and research in informing immigration and settlement policy is an important but under-examined area of inquiry. Knowledge for evidence-based policy-making is most likely to be useful to policymakers when it is produced collaboratively through sustained engagement between academic and non-academic stakeholders. This paper seeks to explore the role of non-governmental organizations in evidence/research-centred knowledge mobilization/transfer by a case study of CERIS—The Ontario Metropolis Centre, one of five immigration research centres in Canada that promoted partnerships to facilitate ongoing, systematic and timely exchange of social science knowledge. We explore the strategies and outcomes of establishing and maintaining relationships among academic researchers, representatives from non-governmental organizations and government policymakers. The experience at CERIS underscores the potential benefits from partnerships with non-governmental organizations that have detailed local knowledge of immigration and settlement issues and highlights the persistent challenges of funding and power imbalances that impede equitable and effective partnerships. The CERIS experience offers valuable insights into successful knowledge exchange from which the local, national and international immigration policy community can learn.

This article arises from a long-standing government-academic-NGO collaboration on immigration and settlement. While the dedicated Metropolis funding has sadly dried up, the legacy of the collaboration continues to generate impact for academic and non-academic partners. This article is about the role of NGO partners in KT/KM and illustrates not only what they get from a collaboration but what expertise they bring to the table. CERIS is itself a knowledge brokering organization – check out last month’s journal club post for more on institutional knowledge brokering.

This is a story about power and the “power imbalances that impede equitable and effective partnerships” – although on this very important point I would encourage the authors to go further than they did. The conclusion is correct, in my opinion, but I would like to have seen more in the analysis, which was devoted mainly to the research symposia (see below).

The role of local NGOs is best summed up by the authors. “Locally, each centre was also required to establish partnerships with representatives from local NGOs that are service providers and advocates for newcomer populations. Since each centre was to concentrate on issues in its own region, community partners provided critical familiarity with local immigration and settlement issues and crucial contacts to facilitate the research, its dissemination and its use.” But even here the authors go one step further than working with NGOs merely for access to communities – yet they modestly do not mention this in the article itself. Check out the author list.

Two of the authors are from the Ontario Council of Agencies Serving Immigrants. CERIS is predicated on a collaboration that involves non-academic partners throughout, including in governance, decision making, dissemination in peer reviewed literature and in the use of the evidence by NGO partners. It is this mutual involvement in all stages of the research to impact process that makes this an authentic partnership.

This is particularly justified in policy research. “It is virtually impossible to draw a straight-line link between research and policy decision. Rather, ongoing knowledge exchange in which all participants contribute to the identification of research questions, their investigation, the interpretation and presentation of research findings and their dissemination is most likely to result in relevant knowledge and its actual utilization in policy-making.”

In an authentic partnership, explicit efforts are made to share power. Authentic partnerships inform the research that is conducted, helping to co-create evidence that has academic integrity and can inform decisions about public policy and social services. As previously covered in this journal club, it is engaged scholarship, not knowledge transfer, that helps get research evidence used by non-academic partners.

My only complaint about the paper is that the authors set up this amazing collaboration with CERIS as a knowledge brokering organization and go to great lengths to balance power differentials, then spend the majority of the analysis (4 pages) on the annual Community Research Symposium. I’m sure it’s a great symposium. But it’s only the public and most visible example of the collaboration that is CERIS. There is so much more to CERIS that accounts for its success as a knowledge brokering organization. In their own words, “In the end, KT and KM are about relationship building, not simply about doing and disseminating research.”

Questions for brokers

It’s not about academic supply of and community demand for knowledge. It’s about valuing the different yet complementary expertise of academics, policy makers, community partners and people with lived experience. How do you tell an academic researcher that s/he doesn’t know it all?

Assessing the impact of a symposium (a discrete activity) is far easier than assessing the impact of a long-standing collaboration. What advice would you give John Shields and his colleagues in the event they are considering a next paper assessing their efforts to balance power in the collaboration?

What methods do you use to balance power between academic and non-academic (especially NGO) partners?

ResearchImpact-RéseauImpactRecherche is producing this journal club series as a way to make the evidence and research on knowledge mobilization more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read the article. Then come back to this post and join the journal club by posting your comments.

Collective action for implementation: a realist evaluation of organisational collaboration in healthcare

Rycroft-Malone, J., Burton, C.R., Wilkinson, J., Harvey, G., McCormack, B., Baker, R., Dopson, S., Graham, I.D., Staniszewska, S., Thompson, C., Ariss, S., Melville-Richards, L., & Williams, L. (2016). Collective action for implementation: A realist evaluation of organisational collaboration in healthcare. Implementation Science, 11(17). DOI: 10.1186/s13012-016-0380-z


Background: Increasingly, it is being suggested that translational gaps might be eradicated or narrowed by bringing research users and producers closer together, a theory that is largely untested. This paper reports a national study to fill a gap in the evidence about the conditions, processes and outcomes related to collaboration and implementation.

Methods: A longitudinal realist evaluation using multiple qualitative methods case studies was conducted with three Collaborations for Leadership in Applied Health Research in Care (England). Data were collected over four rounds of theory development, refinement and testing. Over 200 participants were involved in semi-structured interviews, non-participant observations of events and meetings, and stakeholder engagement. A combined inductive and deductive data analysis process was focused on proposition refinement and testing iteratively over data collection rounds.

Results: The quality of existing relationships between higher education and local health service, and views about whether implementation was a collaborative act, created a path dependency. Where implementation was perceived to be removed from service and there was a lack of organisational connections, this resulted in a focus on knowledge production and transfer, rather than co-production. The collaborations’ architectures were counterproductive because they did not facilitate connectivity and had emphasised professional and epistemic boundaries. More distributed leadership was associated with greater potential for engagement. The creation of boundary spanning roles was the most visible investment in implementation, and credible individuals in these roles resulted in cross-boundary work, in facilitation and in direct impacts. The academic-practice divide played out strongly as a context for motivation to engage, in that ‘what’s in it for me’ resulted in variable levels of engagement along a co-operation-collaboration continuum. Learning within and across collaborations was patchy depending on attention to evaluation.

Conclusions: These collaborations did not emerge from a vacuum, and they needed time to learn and develop. Their life cycle started with their position on collaboration, knowledge and implementation. More impactful attempts at collective action in implementation might be determined by the deliberate alignment of a number of features, including foundational relationships, vision, values, structures and processes and views about the nature of the collaboration and implementation.

With the use of the term “collective action” I first thought (i.e. hoped) this article might be linking implementation science with collective impact (a theoretical framework from the social innovation literature) – now that would have been cool! Small moment of disappointment…nonetheless, this is an article about what organizations can do to create a culture of effective evidence use. The paper examined three CLAHRCs (Collaborations for Leadership in Applied Health Research in Care), which are funded by the UK government to expressly link academic research with clinical practice. They are essentially big experiments in implementation science. I like this paper because its focus on organizations, as opposed to individuals, is an interesting contribution.

The paper argues that much practice has moved beyond creation, translation and transfer of knowledge products (i.e. clinical practice guidelines) to integrated or collaborative processes. “In theory, collaborations could blur this academic-practice boundary and the evidence would be co-produced within communities of practice, increasing the relevance to that community and its potential use.” This calls for organizational collaboration and the need for boundary spanning functions (like knowledge brokers, “these roles and individuals were an essential CLAHRC component”) to overcome barriers between research and practice organizations.

The authors conclude, “collective action in implementation might be determined by the deliberate alignment of a number of features, including foundational relationships, vision, values, structures and processes and views about the nature of the collaboration and implementation.” Wait…doesn’t this sound familiar?

Think about your own efforts at getting evidence into use. They probably also depend on factors like relationships, shared vision, values, structures and processes. At the end of the day these are the same determinants of successful evidence use/knowledge mobilization/implementation derived from studies of individual interventions as well as from this study at the organizational level. And this shouldn’t be a surprise: since an organization is made up of individuals, the characteristics of successful implementation at the individual level will translate up to the organizational level.

However, some of the recommendations are more applicable to organizations, including leadership, governance, and investing in boundary spanners.

One key determinant was the “quality of existing relationships between higher education and health service”. Essentially, your implementation efforts will be more successful if you have pre-existing relationships among the partners, because those relationships ground the efforts in trust. “A history of working together catalysed collective action (and therefore, impacts) in a shorter time frame.”

This sounds obvious for individuals but possibly not for organizations. For some more thinking on organizational relationships read about the institutional knowledge mobilization collaboration between York University and United Way of York Region (now United Way Toronto & York Region).

Questions for brokers:

1. If the determinants of successful implementation are similar for individuals and organizations, what might be different in how these play out (i.e. incentives, complexity, concepts of individual vs organizational collaborations, competencies and qualities)?

2. How might you apply organization-specific determinants to your individual efforts? For example, how might you apply leadership, governance of your collaboration, and boundary spanners to your implementation interventions?

3. If we agree that much practice has moved beyond creation, translation and transfer of knowledge products (i.e. clinical practice guidelines) to integrated or collaborative processes, why do we still see a focus on “bridging the gap” between evidence produced in research and its use in practice settings? Should we be closing the loop, or using some other metaphor that overcomes the siloes of research evidence and its use in practice? Why do we have repositories of “what works” when what works in your collaboration cannot be easily implemented in mine?


Top 5 Most Popular KMb Journal Club Posts of 2016

Here’s a recap of the five most read KMb Journal Club posts of 2016.

#1 – 184 views – 9 comments

Knowledge Translation Through Evaluation: Evaluator as Knowledge Broker

#2 – 121 views – 1 comment

Incorporating Community Engagement Language into Promotion and Tenure Policies: One University’s Journey

#3 – 98 views

Listening to the Stars: The Constellation Model of Collaborative Change

#4 – 77 views

Creating Research Impact: The Roles of Research Users in Interactive Research Mobilisation

#5 – 58 views – 1 comment

Reinventing the Civic University

Knowledge Translation Through Evaluation: Evaluator as Knowledge Broker

Donnelly, C., Letts, L., Klinger, D. & Shulha, L. (2014). Knowledge translation through evaluation: Evaluator as knowledge broker. Canadian Journal of Program Evaluation, 29(1), 36–61. DOI: 10.3138/cjpe.29.1.36


The evaluation literature has focused on the evaluation of knowledge translation activities, but to date there is little, if any, record of attempts to use evaluation in support of knowledge translation. This study sought to answer the question: How can an evaluation be designed to facilitate knowledge translation? A single prospective case study design was employed. An evaluation of a memory clinic within a primary care setting in Ontario, Canada, served as the case. Three data sources were used: an evaluation log, interviews, and weekly e-newsletters. Three broad themes emerged around the importance of context, efforts supporting knowledge translation, and the building of KT capacity.

I usually post a summary of an article that I think makes a valuable contribution to the knowledge mobilization literature and hence to the practice of knowledge mobilization. Not so in this case. This article creates false dichotomies between evaluators/evaluation and knowledge brokers/knowledge translation. It might be news to evaluators, but there is nothing new in it for knowledge brokers. Nonetheless, it raises the question of why this is news to evaluators and what we can do to help them realize they are already an important part of the various worlds of knowledge mobilization.

Here’s a quick summary of the article.

Evaluators want to evaluate knowledge translation.
Evaluators implement knowledge translation activities in a memory clinic.
Evaluators assess knowledge translation success.
Conclusion: evaluators can function as knowledge brokers, a new role for evaluators.

Duh. This is not a “drop the mic” moment for knowledge brokers.

If you ask an evaluator to undertake knowledge translation roles, then they are a knowledge broker. If you asked a plumber to undertake knowledge translation, s/he would be a knowledge broker.

I don’t see what this adds to the literature.

But this allows us to examine the intrinsically interlinked roles of knowledge translation and evaluation.

Every good knowledge broker establishes an evaluation of their knowledge translation intervention to assess whether their work made a difference. Knowledge brokers are evaluators, so it should come as no surprise that evaluators working in a knowledge translation study are knowledge brokers.

When planning a knowledge translation intervention we know that the impact of the intervention is the dependent variable (that thing we measure) and knowledge translation is the independent variable (that thing we change to observe an effect on the dependent variable). The two roles of knowledge broker (affecting the independent variable) and evaluator (assessing the dependent variable) are intrinsically linked.

Much evaluation happens along the way (measuring process indicators) and at the end (ex post, measuring outcome indicators). But since knowledge brokers plan the knowledge translation (including how to evaluate it) at the beginning of the process, knowledge translation planning is ex ante research impact assessment.

See a blog I wrote about this September 2015.

When you connect the dots like this then knowledge mobilization/translation embraces evaluation and knowledge brokers can be evaluators and vice versa. The paper adds nothing new to this but it lets us step back and realize how knowledge translation and evaluation are intimately linked.

Questions for brokers:

1. Was this a moment of “doh, of course I knew that” or was this a revelation to you? Why, and how differently do you see your role now?
2. If knowledge brokers and evaluators both work on a spectrum from planning for impact to assessing impact, why do we present them as a duality with artificially distinct roles?
3. What can the disciplines of knowledge mobilization/translation and evaluation/research impact assessment do to create a shared space of “research impact practitioners” (thank you @JulieEBayley)?
