Indicators at the Interface: Managing Policymaker-Researcher Collaboration

This month’s journal club is written by Anneliese Poetz, Manager of the Knowledge Translation Core for NeuroDevNet, a Network of Centres of Excellence undertaking research, training and knowledge translation focused on childhood neurodevelopmental disorders.
Kothari, A., MacLean, L., Edwards, N. & Hobbs, A. (2011). Indicators at the interface: Managing policymaker-researcher collaboration. Knowledge Management Research & Practice, 9, 208-214. http://www.palgrave-journals.com/kmrp/journal/v9/n3/pdf/kmrp201116a.pdf
Abstract
The knowledge transfer literature encourages partnerships between researchers and policymakers for the purposes of policy-relevant knowledge creation. Consequently, research findings are more likely to be used by policymakers during policy development. This paper presents a set of practice-based indicators that can be used to manage the collaborative knowledge creation process or assess the performance of a partnership between researchers and policymakers. Indicators for partnership success were developed from 16 qualitative interviews with health policymakers and researchers involved with eight research transfer partnerships with government. These process and outcomes indicators were refined through a focus group. Resulting qualitative and quantitative indicators were judged to be clear, relevant, credible, and feasible. New findings included the need to have different indicators to evaluate new vs mature partnerships, as well as specific indicators common to researcher-policymaker partnerships in general.
Indicators are one small part of the broader concept of ‘evaluation’. Evaluation takes various perspectives (formative versus summative), and its associated measurements can be direct or indirect, quantitative or qualitative. Several methods and tools are used in evaluation, including checklists and indicators, with indicators ideally mapped onto a framework such as a logic model. Logic models outline the goals/objectives of the program or intervention (or whatever is being evaluated) as well as the associated outputs, outcomes and impacts. Impact can be thought of as “change”, or “what is different as a result of doing what we did”.
This article is about a set of outcome and process indicators developed to evaluate research partnerships. In addition to breaking new ground in an underdeveloped area, this paper is interesting because the reported indicators are based on stakeholder input – in essence, partnerships were used to develop these partnership indicators. The main thing I learned from this article is that a different set of indicators is needed depending on the stage of the relationship; new relationships are evaluated differently than more mature ones. The paper contains three tables: “common partnership indicators” (communication, collaborative research, dissemination of research), “early partnership indicators” (research findings, negotiation, partnership enhancement), and “mature partnership indicators” (meeting information needs, level of rapport, commitment). Within each of these tables (and associated categories), several indicators and sub-indicators are listed, giving different indicators for different stages of the partnership.
However, for me, there were a few ‘holes’ in the paper as presented. First, there was no mention of an evaluation framework or logic model that would serve as the ‘glue’ tying the indicators together. More importantly, a framework or logic model would link the indicators to their associated goals/objectives and outcomes, so that once the project had moved through the stages of setting goals and achieving outcomes, the indicators could measure how well (or whether) those goals had been achieved.
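Purely as an illustrative sketch (none of this content appears in the paper), the ‘glue’ I have in mind might look something like the following hypothetical logic model for a researcher-policymaker partnership, with indicators attached to each element:

```python
# Hypothetical sketch of a logic model for a researcher-policymaker
# partnership: the kind of structure that could serve as the 'glue'
# tying the paper's indicators to goals, outputs, outcomes and impacts.
logic_model = {
    "goal": "Policy-relevant evidence informs child health policy decisions",
    "inputs": ["researcher time", "policymaker time", "meeting and travel budget"],
    "activities": ["joint priority-setting", "co-designed studies", "regular exchanges"],
    "outputs": ["plain-language briefs", "joint presentations"],
    "outcomes": ["research findings cited in policy options papers"],
    "impacts": ["policy decisions that reflect partnership-generated evidence"],
}
# Each element would then carry one or more indicators (for example, an
# output indicator counting briefs delivered), so that measurement stays
# linked to the goals the partnership set out to achieve.
```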
Typically I think of indicators as a measure of something, and I believe the authors intended the set of indicators they developed to provide a means of measuring partnerships over time. In addition to the indicator’s ‘title’, however, you need to think about (and articulate) a comprehensive definition for each indicator, covering domains such as the following (a hypothetical worked example follows the list):

  1. what is included/excluded,
  2. the rationale for creating and using each indicator,
  3. numerator and denominator (for calculating percentages) if applicable,
  4. the type of indicator (input, output, outcome, process),
  5. strengths and limitations of the indicator definition and data collected to measure it,
  6. data source(s) (e.g. patient records, services provided, evaluation forms collected),
  7. the person or organization responsible for collecting the data,
  8. the date of the last revision of the indicator definition (since it will likely change over time).

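To illustrate (this example is mine, not the authors’), here is a minimal sketch of how one of the paper’s indicator titles, “Communication is on-going”, might be expanded into a complete definition using the components above; every field value is hypothetical:

```python
# Hypothetical worked example: expanding the indicator title
# "Communication is on-going" into a complete definition using the
# eight components listed above. Every field value is illustrative.
indicator_definition = {
    "title": "Communication is on-going",
    "included_excluded": "Scheduled and ad hoc exchanges between the named "
                         "research and policy contacts; broadcast newsletters excluded",
    "rationale": "Regular two-way contact is taken as a sign of an active partnership",
    "numerator": "Months in the reporting period with at least one documented exchange",
    "denominator": "Total months in the reporting period",
    "indicator_type": "process",
    "strengths_limitations": "Easy to count from records, but silent on the quality of exchanges",
    "data_sources": ["meeting minutes", "email logs"],
    "responsible_for_collection": "Partnership coordinator (hypothetical role)",
    "definition_last_revised": "YYYY-MM-DD",  # updated whenever the definition changes
}
```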
Articulating a comprehensive definition allows you to compare across partnerships as well as across time. Although the particular indicators are different for each stage of the partnership, what if the partnership remains in one stage for months or years? In that case, the data collected for indicators in the first month (or year) can be compared to subsequent months (or years) and may provide useful insight into the evolution, and even the quality, of the partnership.
This paper is limited in the sense that it only reports the indicator titles. In addition, while the paper reports that “process and outcome” indicators were created, the reader is left to guess which of the listed indicators fall within which category. But the biggest problem, for me, is that the ‘indicators’ appear to be more like ‘checkpoints’ on a checklist. Consider as an example (from page 208):
“1.0 Communication is clear
            1.1 Communication is on-going
            1.2 Communication involves face-to-face meetings as well as telephone, mail, email and fax methods
            1.3 The same contact people continue over the life of the project
            1.4 A common language/lexicon is used by both parties”
Essentially, each of the above represents a checkpoint or sub-checkpoint – you either do it or you don’t. In the abstract, the authors say the “indicators were judged to be clear, relevant, credible, and feasible”, but curiously there was no mention of a requirement that they be measurable. I struggled to envision how you could define them using the list of indicator definition components above and collect information beyond either “yes we did this” or “no we didn’t”. I don’t think it’s possible, which is a big part of the reason I believe they are checkpoints and not indicators.
Still, it is a form of evaluation. As mentioned above, checklists are used for evaluation, and even the process for constructing these checkpoints follows established standards and processes for creating a checklist, such as stakeholder consultation to determine the checkpoints and their categorization and ordering. The only thing missing is a determination of their relative weighting in terms of importance. Nevertheless, the work done by these authors can be valuable for evaluating whether certain components or processes exist in your partnerships and which ones still need to be incorporated.
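To make the checkpoint-versus-indicator distinction concrete, here is a small hypothetical sketch (again, not from the paper) contrasting a weighted checklist score, which only records whether each checkpoint was met, with an indicator whose value can be tracked across reporting periods:

```python
# Hypothetical sketch: a weighted checklist versus a measurable indicator.

# Checklist view: each checkpoint is either met or not. The importance
# weights are the element the paper does not address.
checkpoints = [
    # (checkpoint, importance weight, met?)
    ("Communication is on-going", 3, True),
    ("The same contact people continue over the life of the project", 2, True),
    ("A common language/lexicon is used by both parties", 1, False),
]

def checklist_score(items):
    """Weighted share of checkpoints met, as a percentage."""
    total = sum(weight for _, weight, _ in items)
    met = sum(weight for _, weight, done in items if done)
    return 100 * met / total

# Indicator view: a measurement that can be compared across reporting
# periods, e.g. the share of months with at least one documented exchange.
months_with_contact = {"year 1": 7, "year 2": 10, "year 3": 12}

print(f"Checklist score: {checklist_score(checkpoints):.0f}%")
for period, months in months_with_contact.items():
    print(f"{period}: communication in {100 * months / 12:.0f}% of months")
```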
Questions for brokers

  1. What do you consider to be an “indicator”? Why?
  2. What are the components of a complete indicator definition? Choose one of the indicators presented in this paper and write a complete definition for it according to these components.
  3. A checklist represents one form of evaluation, while indicators are another. In which situation(s)/context(s) would you use a checklist as opposed to a set of indicators?
  4. What do you believe is the best way to evaluate partnerships?

ResearchImpact-RéseauImpactRecherche is producing this journal club series as a way to make the evidence and research on knowledge mobilization more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read the article, then come back to this post and join the journal club by posting your comments.