Quantitative Evaluation (and a little shameless self-promotion) / Évaluation quantitative (et un peu d’autopromotion éhontée)

By David Phipps, RIR-York

Amanda Cooper (@ACooperKMb) recently released her evaluation of 44 Canadian Research Brokering Organizations. She presents a quantitative method for evaluating the effort of a system of knowledge mobilization.

Amanda Cooper (@ACooperKMb) a récemment dévoilé son évaluation de 44 organisations canadiennes de courtage de recherche. Elle présente une méthode d’évaluation quantitative visant à mesurer les efforts d’un système de mobilisation des connaissances.

Knowledge mobilization struggles with evaluation. Evaluating an individual instance of knowledge mobilization is feasible with the right baseline and pre/post intervention metrics. But rolling that up and evaluating a system of knowledge mobilization (like any one of the knowledge mobilization units in the ResearchImpact-RéseauImpactRecherche network) has so far proven challenging.

So thank you, Amanda Cooper (Assistant Professor in the Faculty of Education at Queen’s University, Kingston, Ontario). Amanda recently posted a report titled “Knowledge mobilization in education: A cross-case analysis of 44 research brokering organizations across Canada”. Amanda developed a quantitative methodology to evaluate the efforts of Canadian research brokering organizations (RBOs), grounded in the evidence on research utilization. We know that people-centred methods encourage greater research use than methods based solely on making packaged knowledge accessible to decision makers. In the words of Sandra Nutley and her colleagues in Using Evidence, “[p]ersonal contact is crucial … studies suggest that it is face-to-face interactions that are most likely to encourage policy and practice uses of research” (page 74). In Amanda’s methodology, points are assigned according to how the RBO employs products (12 points), events (20 points) and networks (20 points), as well as overall features (20 points). You can see that more points are assigned to people-centred methods (events and networks) than to purely product-based methods. How points are assigned is detailed in Appendix B of her report.
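To make the weighting concrete, here is a minimal sketch of how a total might be assembled from the four categories named above. Only the category maxima (12, 20, 20, 20) come from the text; the scoring logic, function names and example points are illustrative assumptions, not Cooper's actual instrument (which is detailed in Appendix B of her report).

```python
# Illustrative only: category maxima come from the article (products 12,
# events 20, networks 20, overall features 20); the rest is assumed.
MAX_POINTS = {"products": 12, "events": 20, "networks": 20, "overall_features": 20}

def score_rbo(points_awarded):
    """Cap each category at its maximum and return the total as a
    percentage of all available points."""
    available = sum(MAX_POINTS.values())  # 72 points across the four categories
    earned = sum(min(points_awarded.get(cat, 0), cap)
                 for cat, cap in MAX_POINTS.items())
    return 100 * earned / available

# A hypothetical RBO that leans on events and networks scores well,
# because those people-centred categories carry more available points:
example = {"products": 6, "events": 18, "networks": 17, "overall_features": 14}
print(round(score_rbo(example)))
```

Note how an organization that maxed out products alone could never rank highly: that category contributes at most 12 of the 72 available points.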

Amanda used the RBOs’ websites as the data source and scored each of the 44 RBOs on a scale out of 100. Amanda cites ResearchImpact as one of the RBOs, but the data she used were pulled from York’s Knowledge Mobilization Unit. The Harris Centre, another RIR member, is also included separately as one of the 44 RBOs.

Key Point #1: The methodology is quantitative, reliable and reproducible; Amanda reports satisfactory inter-rater reliability testing of the tool, including the average intra-class correlation coefficient.
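For readers unfamiliar with the statistic behind Key Point #1, the sketch below computes a one-way intraclass correlation coefficient, ICC(1,1), for two raters scoring the same set of organizations. The ratings are invented for illustration, not data from Amanda's report; an ICC near 1 indicates strong agreement between raters.

```python
# Hedged illustration of an inter-rater reliability check: one-way
# intraclass correlation, ICC(1,1). The ratings below are made up.
def icc1(ratings):
    """ratings: one row per target (RBO), one column per rater."""
    n = len(ratings)      # number of targets
    k = len(ratings[0])   # raters per target
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-target and within-target mean squares (one-way ANOVA):
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two raters who agree closely on five hypothetical organizations:
two_raters = [[81, 79], [62, 65], [45, 44], [70, 73], [55, 52]]
print(round(icc1(two_raters), 2))  # close to 1
```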

Key Point #2: This method evaluates a system of knowledge mobilization not the efficacy of an individual knowledge mobilization intervention.

Key Point #3: This method measures the efforts of Canadian RBOs. It does not measure the impact of those efforts. That more effective RBO efforts will result in greater impact is a testable hypothesis, but it is plausible that this would be the case.

Key Point #4 (shameless self-promotion alert): RIR-York achieved the highest score in this study.

Each with a score of 81%, RIR-York tied with the Fraser Institute and the Canadian Education Association as the top-performing RBOs. The Fraser Institute achieved this score with a budget of $12.8M; CEA achieved it with a budget of $2M. York’s budget for knowledge mobilization is approximately $250,000, so RIR-York accomplished the same effort on a fraction of the budget. Data from the top-ranked RBOs are presented below (scores and budgets beyond those cited above are left blank where they did not survive in this copy of the post).

| RBO | Type | Size (FTE) | Budget | Score on KMb Matrix (%) |
|---|---|---|---|---|
| 1.2.1 RI | NfP, university research centre | Small (3) | $250,000 | 81 |
| 1.2.4 Fraser | NfP, think tank | Large (60) | $12.8M | 81 |
| 1.4.2 CEA | Memb, network | Small (9) | $2M | 81 |
| 1.2.4 AIMS | NfP, think tank | Small (5) | $872,234 | |
| 1.2.0 CCL | NfP, general | Large (77) | | |
| 1.2.3 The Centre | NfP, issue-based | Large (25) | | |
| 1.2.0 TLP | NfP, general | Large (74) | | |
| 1.2.1 HC | NfP, university research centre | Med (11) | | |
| 1.2.0 CCBR | NfP, general | Med (12) | | |
| 1.1.2 E-BEST | Gov, district level | Small (6.5) | | |
| 1.2.1 CEECD | NfP, university research centre | Small (9) | | |
| 1.2.2 P4E | NfP, advocacy | Small (9) | | |
| 1.2.3 LEARN | NfP, issue-based | Large (33) | | |
| 1.2.1 HELP | NfP, university research centre | Large (50) | | |
| 1.1.3 CSC | Gov, standards | Large (20) | | |

We need more research like this into the processes of knowledge mobilization, engaged scholarship and community-based research. Much of what we know comes from individual studies of individual instances of knowledge mobilization. As these activities become more embedded in institutions and systems, we will increasingly need research on those systems and on how they create infrastructure to support the individual instances. You can read more about other methods for evaluating the impact of research, such as Payback and Productive Interactions, in a 2011 special issue (Volume 20, Number 3) of the journal Research Evaluation.

Thank you to Amanda for your important contributions to this emerging field.
