Rycroft-Malone, J., Burton, C.R., Wilkinson, J., Harvey, G., McCormack, B., Baker, R., Dopson, S., Graham, I.D., Staniszewska, S., Thompson, C., Ariss, S., Melville-Richards, L., & Williams, L. (2016). Collective action for implementation: A realist evaluation of organisational collaboration in healthcare. Implementation Science, 11(17). DOI: 10.1186/s13012-016-0380-z
Background: Increasingly, it is being suggested that translational gaps might be eradicated or narrowed by bringing research users and producers closer together, a theory that is largely untested. This paper reports a national study to fill a gap in the evidence about the conditions, processes and outcomes related to collaboration and implementation.
Methods: A longitudinal realist evaluation using multiple qualitative methods case studies was conducted with three Collaborations for Leadership in Applied Health Research and Care (England). Data were collected over four rounds of theory development, refinement and testing. Over 200 participants were involved in semi-structured interviews, non-participant observations of events and meetings, and stakeholder engagement. A combined inductive and deductive data analysis process was focused on proposition refinement and testing iteratively over data collection rounds.
Results: The quality of existing relationships between higher education and local health service, and views about whether implementation was a collaborative act, created a path dependency. Where implementation was perceived to be removed from service and there was a lack of organisational connections, this resulted in a focus on knowledge production and transfer, rather than co-production. The collaborations’ architectures were counterproductive because they did not facilitate connectivity and had emphasised professional and epistemic boundaries. More distributed leadership was associated with greater potential for engagement. The creation of boundary spanning roles was the most visible investment in implementation, and credible individuals in these roles resulted in cross-boundary work, in facilitation and in direct impacts. The academic-practice divide played out strongly as a context for motivation to engage, in that ‘what’s in it for me’ resulted in variable levels of engagement along a co-operation-collaboration continuum. Learning within and across collaborations was patchy depending on attention to evaluation.
Conclusions: These collaborations did not emerge from a vacuum, and they needed time to learn and develop. Their life cycle started with their position on collaboration, knowledge and implementation. More impactful attempts at collective action in implementation might be determined by the deliberate alignment of a number of features, including foundational relationships, vision, values, structures and processes and views about the nature of the collaboration and implementation.
With the use of the term “collective action” I first thought (i.e. hoped) this article might be linking implementation science with collective impact (a theoretical framework from the social innovation literature) – now that would have been cool! Small moment of disappointment…nonetheless, this is an article about what organizations can do to create a culture of effective evidence use. The paper examined three CLAHRCs (Collaborations for Leadership in Applied Health Research and Care), which are funded by the UK government expressly to link academic research with clinical practice. They are essentially big experiments in implementation science. I like this paper because its focus on organizations, as opposed to individuals, is an interesting contribution.
The paper argues that much practice has moved beyond creation, translation and transfer of knowledge products (e.g. clinical practice guidelines) to integrated or collaborative processes. “In theory, collaborations could blur this academic-practice boundary and the evidence would be co-produced within communities of practice, increasing the relevance to that community and its potential use.” This calls for organizational collaboration and for boundary spanning functions (like knowledge brokers; “these roles and individuals were an essential CLAHRC component”) to overcome barriers between research and practice organizations.
The authors conclude, “collective action in implementation might be determined by the deliberate alignment of a number of features, including foundational relationships, vision, values, structures and processes and views about the nature of the collaboration and implementation.” Wait…doesn’t this sound familiar?
Think about your own efforts at getting evidence into use. They probably also depend on factors like relationships, shared vision, values, structures and processes. At the end of the day, these are the same determinants of successful evidence use/knowledge mobilization/implementation derived from studies of individual interventions as from this study at the organizational level. And this shouldn’t be a surprise: since an organization is composed of individuals, the characteristics of successful implementation at the individual level will translate up to the organizational level.
However, some of the recommendations are more applicable to organizations, including leadership, governance, and investing in boundary spanners.
One key determinant was the “quality of existing relationships between higher education and health service”. Essentially, your implementation efforts will be more successful if you have pre-existing relationships among the partners, because these relationships ground the efforts in trust. “A history of working together catalysed collective action (and therefore, impacts) in a shorter time frame.”
This sounds obvious for individuals but possibly not for organizations. For some more thinking on organizational relationships read about the institutional knowledge mobilization collaboration between York University and United Way of York Region (now United Way Toronto & York Region).
Questions for brokers:
1. If the determinants of successful implementation are similar for individuals and organizations, what might be different in how these play out (e.g. incentives, complexity, conceptions of individual vs. organizational collaboration, competencies and qualities)?
2. How might you apply organization-specific determinants to your individual efforts? For example, how might you apply leadership, governance of your collaboration, and boundary spanners to your implementation interventions?
3. If we agree that much practice has moved beyond creation, translation and transfer of knowledge products (e.g. clinical practice guidelines) to integrated or collaborative processes, why do we still see a focus on “bridging the gap” between evidence produced in research and its use in practice settings? Should we be closing the loop, or using some other metaphor that overcomes the silos of research evidence and its use in practice? Why do we have repositories of “what works” when what works in your collaboration cannot be easily implemented in mine?
ResearchImpact-RéseauImpactRecherche is producing this journal club series as a way to make the evidence and research on knowledge mobilization more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read the article. Then come back to this post and join the journal club by posting your comments.