Knowledge Translation Through Evaluation: Evaluator as Knowledge Broker

Donnelly, C., Letts, L., Klinger, D., & Shulha, L. (2014). Knowledge translation through evaluation: Evaluator as knowledge broker. Canadian Journal of Program Evaluation, 29(1), 36–61. doi:10.3138/cjpe.29.1.36

http://www.evaluationcanada.ca/secure/29-1-036.pdf

Abstract

The evaluation literature has focused on the evaluation of knowledge translation activities, but to date there is little, if any, record of attempts to use evaluation in support of knowledge translation. This study sought to answer the question: How can an evaluation be designed to facilitate knowledge translation? A single prospective case study design was employed. An evaluation of a memory clinic within a primary care setting in Ontario, Canada, served as the case. Three data sources were used: an evaluation log, interviews, and weekly e-newsletters. Three broad themes emerged around the importance of context, efforts supporting knowledge translation, and the building of KT capacity.

I usually post a summary of an article I think makes a valuable contribution to the knowledge mobilization literature and hence to the practice of knowledge mobilization. Not so in this case. This article creates false dichotomies between evaluators/evaluation and knowledge brokers/knowledge translation. It might be news to evaluators, but there is nothing new here for knowledge brokers. Nonetheless, the article raises the question: why is this news to evaluators, and what can we do to help them realize they are already an important part of the various worlds of knowledge mobilization?

Here’s a quick summary of the article.

Evaluators want to evaluate knowledge translation.
Evaluators implement knowledge translation activities in a memory clinic.
Evaluators assess knowledge translation success.
Conclusion: evaluators can function as knowledge brokers, a new role for evaluators.

Duh. This is not a “drop the mic” moment for knowledge brokers.

If you ask evaluators to undertake knowledge translation roles, then they are knowledge brokers. If you asked a plumber to undertake knowledge translation, s/he would be a knowledge broker.

I don’t see what this adds to the literature.

But this allows us to examine the intrinsically interlinked roles of knowledge translation and evaluation.

Every good knowledge broker establishes an evaluation of their knowledge translation intervention to assess whether their work made a difference. Knowledge brokers are evaluators, so it should come as no surprise that evaluators working in a knowledge translation study are knowledge brokers.

When planning a knowledge translation intervention we know that the impact of the intervention is the dependent variable (that thing we measure) and knowledge translation is the independent variable (that thing we change to observe an effect on the dependent variable). The two roles of knowledge broker (affecting the independent variable) and evaluator (assessing the dependent variable) are intrinsically linked.
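To make the variable roles concrete, here is a minimal, purely illustrative sketch (the numbers and the pre/post design are hypothetical, not taken from the article) in which delivering the KT intervention is the independent variable and the change in an outcome indicator is the dependent variable:

```python
# Illustrative sketch only: hypothetical scores on some 0-100 outcome
# indicator, measured before and after a KT intervention.
baseline_scores = [52, 48, 61, 55, 47, 58]   # before the intervention
followup_scores = [63, 59, 70, 66, 55, 71]   # after the intervention

def mean(values):
    return sum(values) / len(values)

# The evaluator's question: did changing the independent variable
# (delivering the KT intervention) move the dependent variable
# (the measured outcome)?
effect = mean(followup_scores) - mean(baseline_scores)
print(f"Mean change in outcome indicator: {effect:+.1f} points")
```

A real evaluation would of course need a comparison group or other design controls; the sketch only shows which variable plays which role.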

Much evaluation happens along the way (measuring process indicators) and at the end (ex post, measuring outcome indicators). But since knowledge brokers plan the knowledge translation (including how to evaluate it) at the beginning of the process, knowledge translation planning is ex ante research impact assessment.
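As a purely hypothetical sketch (the intervention and indicator names are invented for illustration), a KT plan that fixes its process and outcome indicators before the work begins, which is what makes the planning an ex ante assessment, might look like this:

```python
# Hypothetical KT plan, drafted before the intervention starts.
# Fixing the indicators ex ante makes the evaluation criteria part of
# the plan rather than a bolt-on at the end.
kt_plan = {
    "intervention": "memory clinic training sessions",  # invented example
    "process_indicators": [   # measured along the way
        "number of sessions delivered",
        "attendance per session",
    ],
    "outcome_indicators": [   # measured ex post
        "change in clinicians' self-rated confidence",
        "referral patterns six months on",
    ],
}

# The broker/evaluator later reports against the same plan, ex post.
for stage in ("process_indicators", "outcome_indicators"):
    print(f"{stage}: {', '.join(kt_plan[stage])}")
```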

See a blog post I wrote about this in September 2015.

When you connect the dots like this, knowledge mobilization/translation embraces evaluation, and knowledge brokers can be evaluators and vice versa. The paper adds nothing new to this, but it lets us step back and realize how knowledge translation and evaluation are intimately linked.

Questions for brokers:

1- Was this a moment of “doh, of course I knew that” or was this a revelation to you? Why and how different do you see your role now?
2- If knowledge brokers and evaluators both work on a spectrum from planning for impact to assessing impact why do we present them as an artificial duality with artificially distinct roles?
3- What can the disciplines of knowledge mobilization/translation and evaluation/research impact assessment do to create a shared space of “research impact practitioners” (thank you @JulieEBayley)?

ResearchImpact-RéseauImpactRecherche is producing this journal club series as a way to make the evidence and research on knowledge mobilization more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read the article, then come back to this post and join the journal club by posting your comments.

10 Responses to “Knowledge Translation Through Evaluation: Evaluator as Knowledge Broker”

written by Hilary on 5 December 2016

As a knowledge broker, perhaps this isn’t new because we run the gamut of planning through and including evaluation and implementation. This might be newer for evaluators, as often those in a sole evaluation role do not engage in the planning of activities across the continuum of KMb? So there is a discrepancy within/across the roles: if evaluators are not involved in planning, but they are involved in evaluation and assessing the impact of implementation, then there is indeed a distinction in the cases where evaluators are called in to just evaluate. I think that if an evaluator is involved in the whole gamut, then yes, they are knowledge brokers. If they are just involved in consulting for the evaluation stage, then maybe they really are just evaluators? And does calling in those who have the particular title or skill in evaluation (without themselves having knowledge or perception of knowledge mobilization?) create shared space? Otherwise, I think that knowledge brokers = evaluators, but not all evaluators see themselves as knowledge brokers? Or not at all times? Therefore shared space for or as research impact practitioners is explored through the evaluation stages, backwards mapping to the planning of activities and forward mapping to implementation, dissemination, and use? (grabbing at straws a little bit here as I think the distinctions are quite subtle…)

written by David Phipps on 6 December 2016

Love how you’ve used the term “research impact practitioners”. Julie Bayley and I imagine this to be that big bucket of all those who contribute to making and assessing impacts, so it would include evaluators of research impacts whether or not they also see themselves as knowledge brokers.

written by Alison Hoens on 6 December 2016

Evaluation has been established as a key function for knowledge brokering in multiple models of knowledge brokering – see Glegg SM, Hoens A. Role Domains of Knowledge Brokering: A Model for the Health Care Setting. J Neurol Phys Ther. 2016 Apr;40(2):115–23. doi:10.1097/NPT.0000000000000122 – or view the video abstract at https://youtu.be/udp8JNu_tL4

written by Janet Harris on 6 December 2016

1) You’ve noted that this article adds little to the literature but I think we need to ask: Which literature? KT is one strand; participatory evaluation a second; and impact evaluation a third. Many people are familiar with one strand and the fact that the article brings them together may be helpful for them.
2) People who do summative evaluation may be pressured to adopt a particular design and methods, leaving little room for negotiating the aim of the evaluation at the start of the project. They are in effect given a distinct role by the funder. Some may work as brokers but the role won’t be explicitly recognised by those who are paying for the data.
3) The shared space needs to be created at the beginning of an evaluation, when evaluators have a responsibility to educate funders on the importance of designing with and for end users. The article makes this point.

written by David Phipps on 6 December 2016

100% agree on bringing evaluators into a shared space from the beginning. That is the model preferred by the International School on Research Impact Assessment. Building evaluation in at the beginning is, as Hilary mentions above, a role for knowledge brokers. We should strive to remove the binary distinction between KT planning and evaluation and recognize that the spectrum of activities is interrelated throughout.

written by Julie Bayley on 6 December 2016

1- Was this a moment of “doh, of course I knew that” or was this a revelation to you? Why and how different do you see your role now?
No, not a revelation, and no, there’s no sense of role difference for me, nor would I say it’s a new role. I do find the paper interesting in its sense of ‘feedback’ rather than ‘evaluation as a bolt-on’, but I find the artificial roles here make it hard to follow ‘what’s new’. I’ve read the paper and *think* I understand that the evaluator was also continually engaged with the stakeholder groups, acting as (in effect) an intermediary and thus performing knowledge broker functions. I do welcome the transparency between the functions being performed; from a UK perspective, when impact was introduced into REF2014, we had to recognise that despite much engaged work, we’d not really evaluated/tracked effects well enough to build strong cases. That has since changed, but I do continue to find value in dissecting the elements of KMb to connect them more transparently (as is done here).

2- If knowledge brokers and evaluators both work on a spectrum from planning for impact to assessing impact why do we present them as an artificial duality with artificially distinct roles?
Pragmatically, we need to, to some extent (jobs, functional roles, etc.), but in my opinion we need to split them functionally (plan vs. evaluate) rather than by arbitrary job titles. In my experience, the skills associated with (e.g.) brokering do not necessarily mean one can (e.g.) evaluate. Also, if the broker is also the evaluator, there is a real risk of bias; most rigorous evaluation methods require objectivity, or at least reflexive and epistemologically transparent subjective methods if necessary. Embedding an evaluator in ongoing engagement can bias participants’ responses. In short, the evaluator becomes part of the thing being evaluated, which raises cautions over the validity of results. This may be more of an issue for academic evaluations, but it nonetheless raises intriguing ethical considerations.

3- What can the disciplines of knowledge mobilization/translation and evaluation/research impact assessment do to create a shared space of “research impact practitioners” (thank you @JulieEBayley)?
(Welcome!) The shared space must be the goal, with a clear sense of the pathways (how) and skills (who) needed to achieve and evaluate (what). The ‘research impact practitioners’ term is part of a call to recognise the multitude of skills and activities within this space under a single banner. There are different competencies at play across the spectrum, but we should be working out how they overlap/can be connected rather than carving out discrete job titles. We need evidence-informed practice, but in equal measure we need the people-based skills to interact and ‘mobilise’ knowledge towards effect. These roles may or may not be undertaken by the same person, and we need to decide if the rigours of academic reflexivity are needed here. Fundamentally, if we work out how the jigsaw pieces (functions) fit together to get us to the finished puzzle (impact), and how interactional variables may influence the situation under evaluation, then maybe we get closer to coordinated practice.

written by David Phipps on 6 December 2016

Julie wins for most syllables in one thought for “reflexive and epistemologically transparent subjective methods”.

But the comment about objectivity is interesting. I argue we need to move away from the binary and create shared spaces along the spectrum from planning to evaluating. The comments agree. But then do we have a concern about objectivity? And if the only way to address objectivity is to distance the evaluator from the planner, then we have returned to the binary.

I think this needs a café and a beer to stimulate some broker brainstorming.

written by Remare on 6 December 2016

It is worthwhile to clarify two roles of evaluation in relation to KT.
First, all evaluations are expected to utilize evidence to inform decision making, which constitutes a KT function. However, the evaluand or intervention is not typically the responsibility of the evaluator (to avoid bias). In other words, evidence about a program/intervention/etc. is made available by the evaluator for translation to improve processes/activities and/or outcomes.
Second, knowledge mobilization interventions/programs (when these do not constitute evaluations) can be evaluated, with the program managers (KT practitioners) not serving as evaluators.

The article shows an example of the second role but with elements and expectations of the first role.

1. The statement that “Evaluators implement knowledge translation activities in a memory clinic” is not an accurate description of what the article reports. The authors could have done a better job of providing a clearer program context to avoid this conclusion by readers. Evaluators were NOT the implementers of the memory clinic intervention.
2. I agree that the article does not offer much in this literature space that is new.
3. Research impact assessment refers to a broad range of evaluative processes and functions related to research, which definitely includes both targeting research-related KME/KT processes as evaluands and using KT activities to enhance the achievement of research impact through the utilization of evidence obtained from the RIA.

written by David Phipps on 6 December 2016

The bias comment comes up again, recapitulating what Julie Bayley commented above, but then possibly in opposition to Hilary’s comment that all knowledge brokers should be planning (and undertaking?) evaluation of their intervention. If an evaluator is part of the research impact practitioner team, can s/he be unbiased? Similarly, if s/he is not part of the team but is evaluating an arm’s-length intervention, is there a risk of missing some context-dependent observations?

I don’t have answers but I am enjoying the questions.

written by scardonamusic.com on 28 July 2017

So while the capacity of the original primary health organization did not appear to be enhanced within the time frame of this study, individuals who were involved began to see themselves as having a responsibility for carrying over what they had learned through evaluative inquiry into their new settings.
