#ShitDavidSays About Impact #7: If Impact Occurred but No One Was There to Measure It… / #ShitDavidSays About Impact, no 7 : s’il y a un impact, mais que personne n’est là pour le mesurer…

If impact occurred but no one was there to measure it, did anything ever really happen? In this 7th and final post in this series, David speaks about the importance of assessing research impacts, because if we don’t, how can we demonstrate the value of research to end beneficiaries? He points out the irony of asking researchers to report on impacts in end of grant reports.

Si la recherche produit un impact, mais que personne n’est là pour le mesurer, est-ce qu’on peut dire qu’il a vraiment eu lieu ? Dans ce 7e et dernier billet de la série, David parle de l’importance d’évaluer les impacts : comment prouver la valeur des recherches aux utilisateurs finaux si l’on n’a rien mesuré ? Il souligne aussi l’ironie qu’il y a à demander aux chercheurs de faire état des impacts dans leurs rapports finaux.


In Canada, we are developing a culture of creating impacts. This is evident through grant applications that require a knowledge translation (CIHR, health charities), knowledge mobilization (SSHRC) or commercialization (NSERC) strategy. SSHRC also requires an outcomes statement that predicts the difference the funded project will make for Canadians. As identified by the Canadian Federation of Humanities and Social Sciences, this could be an impact on scholarship and training as well as an impact on the economy, society and culture or public policy.

But if we don’t assess the impacts beyond scholarship and training, how can we fulfil these obligations in our grant applications?

The UK Research Excellence Framework (REF) 2014 collected 6,679 case studies of research impact, which were assessed by panels of academic and non-academic experts in 36 Units of Assessment (i.e. examples of impact arising from history were not compared to impacts arising from chemistry). Research on the REF identified 3,709 unique pathways to impact.


Let me say that again. 6,679 stories of impact and 3,709 different ways to make impact.

That’s right. There is no cookie-cutter approach to either creating or assessing impact. In my closing address to C2UExpo 2017, I pointed out how hard it is not only to plan for impact but also to assess impact, and that being hard is no excuse not to do either. I observed that we didn’t give researchers tenure so they could do something easy.

Get out there and assess the impact of your research, because if you don’t, did you ever make a difference to anyone other than your academic colleagues? The UK has done it for the whole post-secondary system; surely there’s something we can do in Canada?

Well yes there is, thank you for asking.

If you are effectively planning your impact strategy you are therefore also planning your impact assessment. Knowledge translation planning is ex ante research impact assessment. If you plan your impact you are inherently identifying the processes needed to move from research to impact, including the indicators and data sources that will be the evidence of impact. Adapting a generic logic model of research impact (like the co-produced pathway to impact) to your specific case will help guide your efforts for impact planning and impact assessment.

Research Impact Canada (RIC) is piloting a research impact assessment tool adapted from the REF. This tool provides a semi-structured interview guide that creates consistency for collecting the evidence of impact and a case study template that creates consistency for expressing the evidence of impact.

But here’s the thing about impact assessment…recall who is actually making the impact? SSHRC’s 2013 evaluation of their knowledge mobilization programs showed that it was primarily the partners of SSHRC funded projects that had the evidence of impact, not the researchers. This makes sense since it is the partners, not the researchers, who are making the products (industry), developing the policies (government) or delivering the social services (community) that have an impact on local and global citizens.

You need to use the interview guide in the RIC tool to gather the evidence of impact from partners and end users. And you need to do that long after the project has finished since the impact hasn’t usually happened within the course of a funded research project.

HEY FUNDERS…if we need to collect the evidence of impact from partners long after the project has ended, why do you always ask researchers to report on impacts in their end of grant reports?

To return to the question in the title of this post: if a partner uses the evidence produced in a research project to help make impact, but no one was there to collect the evidence of impact long after the grant ended, then did anything ever really happen?

Go into the forest and see if that tree really did fall.

#ShitDavidSays About Impact #6: Impact Is Measured at the Level of the User / #ShitDavidSays About Impact, no 6 : l’impact se mesure chez l’utilisateur

Probably the most important thing David says. Researchers don’t make impact, partners do. So why do we ask researchers to report on impact?

C’est sans doute la chose la plus importante que dit David. L’impact ne se produit pas au départ, chez les chercheurs, mais à l’arrivée, chez les partenaires. Pourquoi, dans ce cas, demandons-nous aux chercheurs d’évaluer l’impact ?

Wait…what…researchers don’t make any impact? Of course they do. Researchers publish papers and build capacity by graduating students. Isn’t that impact? According to the Canadian Federation of Humanities and Social Sciences report on assessing impacts, research can contribute to building knowledge and building capacity, but it can also contribute to cultural, economic, policy and social impact. It’s these latter impacts, those beyond the academy, that are of interest to knowledge mobilization. But who really creates those impacts?

Not researchers.

Researchers don’t make products, industry does. Researchers don’t develop public policy, government does. And researchers don’t deliver social services, community organizations do. So if we want research to have an impact on local and global citizens we do so by supporting researchers collaborating with partners from private, public and non-profit sectors.

Think about it…it’s not researchers making those impacts.

“Now hold on, just a minute there…what about clinical research?” Sure, your research in a medical clinic, in a classroom or in a social work setting will definitely benefit those patients, students or clients. But the challenge of research in practice based settings is to scale it beyond your clinic or your classroom. How can your research based intervention scale through your district, province, country or globally? Whose job is that? Again, probably not the researcher who published the paper. That’s where school boards, clinical practice associations, colleges regulating professional practice and even unions play a role.

SSHRC found this in their 2013 evaluation of their knowledge mobilization funding programs. SSHRC wanted to collect the evidence of impact of the projects they funded. They had three data sources:

1. End of grant reports: no evidence of impact since the impacts usually hadn’t happened yet
2. Interviews with researchers: few had any knowledge of impact since they weren’t the ones making it
3. Interviews with partners: only when SSHRC interviewed partners did they find the evidence of impact

In one case, provincial tax law changed because of a SSHRC-funded collaboration between a researcher and a policy maker, but the researcher knew nothing about the impact beyond the scholarly publications and the graduated students. It wasn’t the researcher who was making tax law; it was the government partner.

So if funders want evidence of impact, why do they continue to ask researchers to complete impact assessment reports? According to their website, Researchfish is “leading the world in research impact assessment” and “over 100K researchers report to their funders the outcomes, outputs and impact of their research into Researchfish”. Why? If researchers are not making impact, why are they the ones reporting on it?

Impact needs to be measured at the level of the end user. Go ahead and ask a researcher what s/he thinks happened. But don’t forget to also ask the partner organization.

Intrinsically linked to this is the way academic research funding is traditionally managed. Academic research funding seeks to create impact beyond the academy. If we accept that it is our partners making the ultimate impact, then why must most academic research funding be managed by the academic institution of the principal investigator? By holding onto the money, academic institutions hold onto power in the research-to-impact collaboration.

It gets even more perverse when we require our non-academic partners to commit cash and in-kind resources to the project. Not only do we need you to make the impact without being able to pay you for your role, but we also expect you to pay your own way while the funder pays for our participation.

Not an equitable partnership at all.

In a SSHRC world a researcher can share funding with a non-academic partner but only if s/he is made a co-applicant instead of being relegated to a second-class partner or collaborator status. However, to be a co-applicant the partner needs a SSHRC or Canadian Common CV.

If an academic researcher wants to create an authentic (i.e. equitable) partnership (that isn’t about supply and demand of knowledge) with a non-academic partner, then help the partner make a CV and make them a co-applicant. That’s what we did in KMb York when we partnered with United Way York Region on a CIHR KT grant and a SSHRC Public Outreach Grant. We transferred 75% of the funding to UWYR, and they hired the project coordinator and directed the project, because we made a CV for the CEO of UWYR.

Share money. Share power. Make authentic partnerships that will fund activities from research to impact.

#ShitDavidSays About Impact #5: Knowledge Hypocrites / Les idées de David sur l’impact, no 5 : l’hypocrisie en MdC

On February 1, 2012, David first wrote about knowledge hypocrites. The challenge that we are all knowledge hypocrites is as true today as it was almost 6 years ago.

Le 1er février 2012, David signait un billet au sujet de l’hypocrisie en mobilisation des connaissances. Presque six ans plus tard, son énoncé provocant selon lequel nous sommes tous des hypocrites de la MdC est toujours aussi vrai.

GoGo the cat sitting on a copy of the book Using Evidence


We are all knowledge hypocrites. I am a knowledge hypocrite. You probably are one also.

There is an evidence base to knowledge mobilization, but are you reading it? Are you using it? Sandra Nutley (recently retired from the Research Unit for Research Utilisation at the University of St Andrews in Scotland) wrote Using Evidence with her colleagues Huw Davies and Isabel Walter in 2007. For me this is a foundational text, providing a deep and wide-ranging review of the literature on how research is used to inform public services.

And it is still relevant today…because let’s face it…there really are no “eureka” moments in our field. We have learned much in the last 10 years, but it is built on a foundation largely crafted by Sandra and her colleagues, which itself rests on the work of pioneers in the field.

Have you read Evidence & Policy? You should. Sandra, Huw and their colleague Alison Powell told you so this year in a paper in Evidence & Policy (vol 13, no. 2: 201-213) titled “Missing in Action: the role of knowledge mobilisation literature in knowledge mobilisation practice”. They surveyed knowledge intermediary organizations to see who is basing their practice on the literature. They found we aren’t. We continue to be knowledge hypocrites.

What I mean by this is that KT/KMb researchers advocate that researchers make their research accessible in different formats and to actively facilitate the uptake of evidence in the context of its use…but they don’t (usually). There are few incentives and rewards for KT/KMb researchers to come to York’s KMb Unit and help us use their evidence in our practice. I recall one conversation I had with a KMb researcher in education. She never considered me an end user of her research even though I read all her work. She only thought that teachers were her end users.

But practitioners are also knowledge hypocrites. We tell policy and practice partners to engage with the evidence and reach out for research expertise in their field…but we don’t (usually). We often have neither the skills nor the time to read academic papers on KT/KMb. I try to give my KMb team one day a month to sit in the library, but it always falls off their agenda because they are busy getting the job done. At performance review I don’t measure them on the number of articles they have read. I have not created incentives or rewards for them. I am a knowledge hypocrite.

That was the driver behind the Knowledge Mobilization Journal Club. In July 2011, I started a monthly online journal club where I would post a summary of an academic article and make observations about the implications for knowledge mobilization practice. There are currently 67 journal club posts. It is a small attempt to close the loop between the scholarship and the practice of knowledge mobilization.

When we evaluated the journal club in 2016, we found it was highly valued by readers (“please keep it going even if we have to clone David”).

To overcome the shame of being a knowledge hypocrite we need to build our skills for knowledge mobilization. My colleague Julie Bayley (@JulieEBayley) and I have recently published a competency framework for practitioners of research impact. Building our skills in both creating impacts (“how”) and in assessing impacts (“what”) will help us all build our research impact literacy, a concept that Julie and I are also developing (see below).

So here’s my question: how will you build your impact literacy to avoid being a knowledge hypocrite?

Impact Literacy diagram

#ShitDavidSays About Impact #4: Impact Frameworks Are Like Toothbrushes… / Les idées de David sur l’impact, no 4 : les cadres d’évaluation de l’impact sont comme les brosses à dents…

With thanks to Karen Ritchie, Head of Knowledge and Information, Health Improvement Scotland, who first coined this phrase. This post examines the plethora of impact frameworks and their – usually inappropriate – use.

Merci à Karen Ritchie, chef du service des connaissances et de l’information de l’organisme écossais Health Improvement, qui a forgé cette métaphore. Ce blogue s’intéresse à la pléthore de cadres, structures et méthodes d’évaluation de l’impact et à l’usage – généralement inadéquat – qui en est fait.

“Impact frameworks are like toothbrushes. Everyone has one and no one wants to use anyone else’s”.

Co-Produced Pathway to Impact


Knowledge to Action Cycle, Canadian Academy of Health Sciences Impact Assessment Framework, Payback method, Co-produced pathway to impact (CPPI), SPIRIT Action Framework, etc., etc., etc.

See a recent review of the strengths and weaknesses of some of these models here.

In Canada, the KTA Cycle dominates. Many networks, programs and projects cite the KTA Cycle as their framework without knowing that the KTA authors themselves never expected it to be used in whole by any single organization. In a review of papers citing KTA, only 10 of 146 actually implemented a portion of it, and only one employed KT methods to move from one stage to the next.

No pathway is perfect which is why everyone creates a new pathway or new modification to a pathway to solve the one thing that doesn’t work for them despite the many things that do work.


But with a plethora of pathways – a veritable plentiful profusion of pathways – how does one go about choosing a pathway that’s right for your research-to-impact project? NIHR asked me this in 2016, and I came up with the following five criteria for impact pathway assessment (as published in this blog on May 5, 2016). Does the pathway:

1. Accommodate and enable collection of evidence for patient benefit?

2. Support engagement of end users (incl. patients, policy, service providers) throughout?

3. Work at the level of the project, the program, the organization, the system?

4. Enable planning by providing general logic informing specific adaptation?

5. Drive uptake/adoption?

In the May 5, 2016 post, I reviewed three pathways: KTA, Payback and the CPPI. Acknowledging bias as I am the author of CPPI (yes, even I made another damn framework!), the CPPI came out on top on these five criteria.

But here’s the thing about any pathway: it is at best generic. No framework can be specific to every project implementing the framework. The CPPI can be used to monitor the progress from biomarker identification to a successful clinical microarray test just as it can be used to monitor the progress from understanding the needs of at-risk youth to the successful implementation of a life skills training program. Clearly these two pathways will be very different. At York in 2016 we had supported 121 large-scale grant applications, of which 42 (35%) had been successful, attracting $47M in external research income. Each one had a different pathway to impact.

Of the 6,679 impact case studies in the UK Research Excellence Framework there were 3,709 unique pathways to impact (see here).

With this diversity, clearly even the best impact frameworks can only be generic. The best advice any funder can give is generic (for example the guidance on knowledge mobilization strategies from SSHRC). It is up to the researchers, partners and the research impact practitioners who support them to use planning tools to develop a specific (or bespoke as @JulieEBayley likes to say) impact pathway for every research to impact project.

Since almost all grant applications require some form of impact pathway, seek out your local research impact practitioner to help secure your next research grant.

#ShitDavidSays About Impact #3: Engaged Scholarship NOT Knowledge Transfer / Les idées de David sur l’impact, no 3 : On parle d’érudition engagée, pas de transfert de connaissances

David Phipps is writing about his lessons after more than a decade of impact. This third post encourages us to engage end users/beneficiaries as we move from knowledge transfer to engaged scholarship.

David Phipps partage les leçons qu’il a apprises en plus de dix ans dans le milieu de l’impact. Ce troisième billet nous incite à faire participer activement les utilisateurs finaux ou les bénéficiaires au mouvement, ce qui nous fait passer du simple transfert des connaissances à l’érudition véritablement engagée. Les détails à #ShitDavidSays About Impact.

Cambridge College

This is a picture of a college at the University of Cambridge. This image recapitulates the traditional scholarly orientation where knowledge (and power) is kept within the university and is disconnected from the external world. There is a single doorway and gate that separates those who have knowledge from those who don’t (see the previous post about the knowledge of our non-academic partners). If we wanted to share information it would have to be translated and/or transferred to those who don’t have this knowledge. And for 600 years we have been working with this business model.

But it needs to change.

I have heard from colleagues at the Rick Hansen Institute (spinal cord injury research) that when the Institute was being established they asked stakeholders about their priorities. Clinicians prioritized biomarkers and neuroimaging. People living with spinal cord injury and their families prioritized bladder control and erectile dysfunction.

If we’re not talking to those directly affected by the research, then we are producing research knowledge that won’t be used (and that’s OK for basic/fundamental research). This includes talking to the end users who use the research evidence for new products, policies and services and to the end beneficiaries who will benefit from them. Bowen and Graham wrote in 2013 that the failure to bridge the knowledge-to-action gap was not a failure of knowledge transfer but a failure of knowledge production.

Let me say that again…it is not a failure of knowledge transfer but a failure of knowledge production. We need to stop trying to transfer knowledge end users/beneficiaries don’t want and start working on research they do want by practicing engaged scholarship.

This means that if we want our research to be used to inform products, policies and services, we need to engage non-academic stakeholders, at a minimum to inform the research but also as collaborators as we seek to co-produce research evidence with them. Engagement is a necessary precursor to impact. You can engage without having impact, but you can’t have impact without engaging. However, metrics of engagement are not a proxy for impact – think about that, Australia, as you run your Engagement and Impact Assessment pilot.

And one final critical piece that derives from the PARIHS framework: we know that making evidence accessible (e.g. on a website) is necessary but not sufficient to inform change (thank you, Sandra Nutley). If you want your research evidence to be used, you have to facilitate the uptake of the evidence in the context of its use. Once you have practiced engaged scholarship to produce your useful evidence, don’t just send it to end users. Go to end users and actively facilitate the uptake of the evidence in their context. You can do this by giving a workshop with end users and/or by training end users in your new method/tool.

Get out of your academic research spaces and listen to end users/beneficiaries. Collaborate with them along the way. And then return to them and facilitate the uptake of evidence in their contexts.

That’s engaged scholarship not knowledge transfer.

Stay tuned as these seven posts about #ShitDavidSays about impact roll out. And if you want to see a webinar on #ShitDavidSays about Impact you can pay to attend a webinar sponsored by the Canadian Association of Research Administrators at noon Eastern on November 10, 2017. More info available here.

#ShitDavidSays About Impact #2: It’s Not About Supply and Demand / Les idées de David sur l’impact, no 2 : Ce n’est pas une question d’offre et de demande

David Phipps is writing about his lessons after more than a decade of impact. This second post recognizes that academics aren’t the only ones who do research. Knowledge mobilization isn’t about supply and demand of knowledge, it’s about finding complementary expertise.

David Phipps partage les leçons qu’il a apprises en plus de dix ans dans le milieu de l’impact. Dans ce deuxième billet, il reconnait que les universitaires ne sont pas les seuls à faire de la recherche. Mais la mobilisation des connaissances ne concerne pas l’offre et la demande en matière de connaissances – il s’agit en fait d’apparier des expertises complémentaires. Les détails à #ShitDavidSays About Impact.

"Community research is a mile wide and an inch deep while academic research is an inch wide and a mile deep" – David Phipps

The first grant application that seeded the York U-UVic knowledge mobilization partnership was written as a technology transfer application geared to the social sciences. All we needed to do was package up the excellent research at our universities, send it out, and magically someone would use it. We had lots of dissemination strategies, all predicated on universities having knowledge that someone else could use.

It never occurred to us that they might not want it. It never occurred to us to ask them what they wanted. We had the knowledge supply and they had a demand for our knowledge.


In fact, within months our community partners, the York Region District School Board and the Human Services Planning Board of York Region, asked us to stop pushing our research on them. After a few more conversations we needed to move to a “pull” model where we responded to the needs of our non-academic partners.

Non-academic organizations do research. Industry does applied research to turn ideas into products. Governments do research so policy decisions are based (in part) on evidence. Community organizations do research to understand their communities so that services are aligned with the needs of citizens.

Knowledge mobilization isn’t about supply and demand. It is less about transferring knowledge (although this is also important) and more about understanding needs to enable co-producing collaborations based on complementary expertise.

One of the first conversations we have at KMb York when we are seeking a researcher to speak with a non-academic partner is, “remember, you don’t know it all”. Academic researchers have one type of knowledge. It is valuable. But so is the knowledge and expertise in community, industry, governments, and especially in those with lived experience.

If an academic researcher can’t appreciate the value of other forms of knowledge and expertise we will celebrate and support their excellent academic scholarship. But that doesn’t make them an excellent partner for a knowledge mobilization opportunity.

There are three conditions that need to be satisfied to make a good knowledge mobilization opportunity:

1- When the research is “right”: when the research has the potential to have a life inside a company making a product, a government making a policy or a community organization delivering a service.

2- When the researcher is “right”: we are not only seeking an excellent researcher but an excellent researcher who appreciates s/he doesn’t know it all.

3- When the partner has the capacity to participate authentically: industry (usually if a large corporation) and government (often) have embedded research capacity. Community organizations do research, but on a very tight budget (time, money, other resources). How can we in the academy help build capacity (i.e. make time) for our partners to participate in an authentic manner?

Knowledge mobilization is facilitated when these three conditions are met.


#ShitDavidSays About Impact: A 7-Part Blog Series / #ShitDavidSays About Impact : Un miniblogue en 7 billets pour savoir ce qu’en dit David

After more than a decade of building systems of research impact at York University and across Canada with the Research Impact Canada network, David Phipps has learned a thing or two (actually…six things) about impact. Each forms a fundamental of impact. Put them together and this is some of the #ShitDavidSays about impact.

Après avoir passé plus de dix ans à mettre sur pied des systèmes d’amplification de l’impact de la recherche, à l’Université York et dans tout le Canada avec le Réseau Impact Recherche, David Phipps a appris une ou deux petites choses (six, en fait) sur le sujet. Chacune est fondamentale pour l’impact de la recherche. Prises ensemble, ça donne euh… les idées de David sur l’impact, pour le dire gentiment. Vous les trouverez ici : #ShitDavidSays.

I was recently invited to open the New Zealand Rehabilitation Conference where the theme was the impact rehab research can have on rehab practice. I have no expertise in rehab research, practice or impact so I needed to keep the story high level but make it relevant to the NZ rehab context. I needed to share some big concepts and illustrate them with examples from practice. I was inspired by an amazing talk given by Dr. Mae Jemison who spoke at the National Council of University Research Administrators (NCURA) annual conference in Washington DC in August 2017. Dr. Jemison was the first African American woman to travel in space when she went into orbit aboard the Space Shuttle Endeavour on September 12, 1992. Her talk was great, but her presentation was my inspiration. She showed a single picture with a single quote or statement and then spoke about that slide.

Pictures. Few words. Lots of stories.

That’s what I imagined as I reflected on my decade-plus work in impact [sidebar: actually, I have been involved in impact since the mid-1990s, when we identified a possible marker of HIV infection during my postdoctoral research. I learned the craft of technology commercialization and joined the University of Toronto Innovations Foundation.] As I developed my slides “After a decade of impact…” I realized I was going a very long way for this talk, so why not make it even more memorable.

I asked conference chair Nicola Kayes if I had to be terribly serious as I opened her conference. After consulting with her program committee, I got permission to change the title to “#ShitDavidSays about Impact”.

In the next six posts of this series I will present some themes that have permeated my work over the years. The headlines for each post are:

• It’s not about supply and demand: Who has what knowledge and the importance of acknowledging power in our research collaborations

• Engaged scholarship NOT knowledge transfer: Dissemination is necessary but not sufficient to create change

• Impact frameworks are like toothbrushes: What are the important elements of any impact framework and how to adapt them to your context

• We are all knowledge hypocrites: There is a science underpinning knowledge mobilization and impact; how are we (or aren’t we) using it?

• Impact is measured at the level of the user: Who really makes impact and where/when do we measure it?

• If impact occurred but no one was there to measure it…: the importance of impact assessment and some stuff related to the UK Research Excellence Framework

Stay tuned as these roll out for details on these six themes. And if you want to see a webinar on #ShitDavidSays about Impact you can pay to attend a webinar sponsored by the Canadian Association of Research Administrators at noon Eastern on November 10, 2017. More info available here.

Reimagining the Concept of a Commonwealth University

This week’s blog post first appeared on The Association of Commonwealth Universities’ blog and is reposted here with permission.

As I have been thinking about a blog for our Association of Commonwealth Universities (ACU), I got a bit stuck on the concept of commonwealth itself. If you look up definitions of the word commonwealth, you will find that they largely refer to relations between states, such as our own use of “the Commonwealth” to refer to former members of the British Empire. When one looks a bit further, one finds, under the label ‘archaic’, the 14th-century origins of commonweal or commonwealth referring to the common good. The common good is the idea of sharing the bounties among people in an equal or just manner.

What would the concept of a Commonwealth university really mean if we were to refer back to the original meaning, the common good? It would support the idea of the social responsibility of universities, for one. It would support the concept of community-university engagement, for another. And it would align very closely with the objectives of the United Nations Sustainable Development Goals (SDGs), wouldn’t it? The preamble of the Transforming Our World statement, which introduces the SDGs, declares: “We are resolved to free the human race from the tyranny of poverty and want and to heal and secure our planet. We are determined to take the bold and transformative steps which are urgently needed to shift the world onto a sustainable and resilient path”. These are goals and ambitions that speak to a deeper understanding of our commonwealth, of recognition of our common place on the planet.

Author: Budd Hall

A true commonwealth university would share a number of characteristics. First it would be an engaged university, engaged in the way the ACU has articulated in the past: “Engagement implies strenuous, thoughtful, argumentative interaction with the non-university world in at least four spheres: setting universities’ aims, purposes, and priorities; relating teaching and learning to the wider world; the back-and-forth dialogue between researchers and practitioners; and taking on wider responsibilities as neighbours and citizens” (Engagement as a Core Value for the University: A Consultation Document, ACU, 2001).

Second, it would be a decolonising university, because it would seek to recover the rich bodies of knowledge that colonialism and the domination of the western canon have covered over, obscured or in other ways attempted to erase. Third, and growing out of the first two characteristics, it would be a place that recognises and celebrates the fact that knowledge is created in community organisations and social movements, among other places. And in this spirit, it would support the co-creation of knowledge on themes that originate in communities themselves. Fourth, a commonwealth university would be a place of action. Students in legal clinics would work on behalf of Indigenous Peoples to fight dangerous extractive industries seeking to ruin the environment. Coalitions of community groups and business school academics would support housing co-ops, community economic development, local food sales and production. The possibilities are endless.

The exciting part of reimagining the commonwealth university is that in larger and smaller ways this is a movement that is already happening. Decolonisation is on the minds of students all over the world. Contributing to the SDGs is in the minds of university leaders and government funding agencies. Examples of the co-construction of knowledge can be found in many places. And exciting actions abound. Moving towards the vision of a commonwealth university means paying less attention to university rankings for example and more to those people in your community who have been excluded historically from the common good. If we do not take passionate care of the common good, then the private good will have little meaning.


Budd Hall is a Steering Committee member for ACU’s Engage Community. Professor Hall is Joint UNESCO Chair in Community-Based Research and Social Responsibility in Higher Education and a Professor of Community Development at the University of Victoria, Canada. His most recent books are: Knowledge, Democracy and Action: Community-University Research Partnerships in Global Perspectives (Manchester University Press), Learning and Teaching Community-Based Research (U of T Press), Higher Education and Community-Based Research (Palgrave-MacMillan) and Knowledge, Engagement and Higher Education’s Contribution to Social Change – with R Tandon and C Esgrigas (Palgrave-MacMillan).

Systems of Engagement

This week’s post first appeared on The Association of Commonwealth Universities’ blog and is reposted here with permission.

Most writing on community-campus engagement focuses on individual projects and practices. This makes sense since most practices are employed at the project level, but what about systems of engagement? Individual projects sit within institutional and community systems. Institutions and communities sit within sector or regional/national systems. Where are these systems of engagement?

As I increasingly engage internationally with like-minded knowledge mobilisers/brokers and impact practitioners (we are a diverse lot!), I am impressed with the networks already working at a system level. Here are some examples (there are certainly more) with brief descriptions from their websites:

Community Based Research Canada: Their intent is to build an inclusive and open network, engaging already existing networks, to build support for community-campus partnerships, community-based research and community engagement.

Development Research Uptake in Sub-Saharan Africa (DRUSSA): Funding has ended, but DRUSSA was a network of 24 universities building capacity for research uptake.

Engagement Australia: The main objective is to lead and facilitate the development of best practice university-community engagement in Australia. This is done through creating inclusive forums for discussion and development of engagement, promoting practice, fostering awareness, building capacity and developing resources.

Global University Network for Innovation: Their mission is to strengthen the role of higher education in society, contributing to the renewal of the visions and policies of higher education across the world, under a vision of public service, relevance and social responsibility.

Knowledge into Practice Learning Network: Founded in 2016, KIPLN Network is an international online learning network dedicated to sharing advice, expertise, and resources to help people get better at using knowledge to inform practice.

Living Knowledge Network: an international network of science shops, which perform science projects responding to civil society’s needs for expertise and knowledge.

National Alliance for Broader Impacts (US): The goal of NABI is to create a community of practice that fosters the development of sustainable and scalable institutional capacity and engagement in broader impacts activity.

National Coordinating Centre for Public Engagement (NCCPE): A national (UK) network that helps universities and the public engage with each other.

Research Impact Canada: A network of universities across Canada investing in support for campus-based knowledge mobilisation to maximise the economic, social, health, cultural and environmental impacts of research.

And don’t forget the ACU Engage Community: an international network of university staff and stakeholders from member universities, who are working or involved in university community engagement and outreach, including public engagement staff, industrial liaison officers, research managers and communications officers, and those specialising in distance or open learning.

One conclusion that can be drawn is that engagement is a global phenomenon, with international networks and national networks existing in both industrialised and developing countries.

How can these system-wide networks help your individual practice? Research on networks shows that membership brings the benefits of legitimisation and reduced transaction costs. You can phone up folks around the world to find out about their practices, or attend international conferences to meet people and learn about their work; joining a network reduces your costs of obtaining this information.

Networks also enhance the scaling up of promising practices. In a recent example, McMaster University (Hamilton, Ontario) learned about the ResearchSnapshot clear-language research summary method from Research Impact Canada. They adapted the format to their own context and produced research snaps that went on to win the Social Sciences and Humanities Research Council's national Award of Excellence for communications. This is a clear example of reduced transaction costs.

An additional, and possibly more important, benefit of membership in any of these systems of community-campus engagement is the membership itself. Join a network of like-minded practitioners and you have found your tribe. Many of us work as solo practitioners (see some literature on this). Finding a tribe helps us feel connected to our work, to other practitioners and ultimately will help us become better practitioners, as the McMaster example shows.

What does this mean for the ACU Engage Community? While the Community is a network of ACU members, these members are likely to belong to other national and international networks. We should work to seek out the benefits members derive from other networks and systems, and use this knowledge to inform the work of the Engage Community.

The national and international networks above are just examples. This list is limited by my own experience. What other networks and systems of community-campus engagement can you add? How do you think these could contribute to the Engage Community?

Netherlands' Research Impact Assessment Exercise / Exercice d’évaluation de l’impact de la recherche aux Pays-Bas

The UK has the Research Excellence Framework. Australia launched the Engagement and Impact Assessment exercise. And the Netherlands has the Standard Evaluation Protocol. Canada can learn from these and from the Research Impact Canada network as we implement our own tool for research impact assessment.

Le Royaume-Uni s’est doté d’un cadre pour l’excellence en matière de recherche, le Research Excellence Framework. L’Australie a mis en place un exercice d’évaluation de la participation et de l’impact dans ce domaine, l’Engagement and Impact Assessment. Et les Pays-Bas disposent d’un protocole d’évaluation normalisé, le Standard Evaluation Protocol. Le Canada peut tirer des enseignements de ces modèles et exploiter le Réseau Impact Recherche qui existe déjà au pays afin de mettre en œuvre son propre outil d’évaluation de l’impact de la recherche.

There is increasing global interest in creating socioeconomic impacts from academic research. National networks such as Research Impact Canada and the National Alliance for Broader Impacts (US) invest in methods to create impacts but neither have national systems of impact assessment. The UK and Australia have national research impact assessment (RIA) exercises but no formal structures to create impact.

And new for me is the Netherlands' RIA process, called the Standard Evaluation Protocol (no snappy title points for the Dutch…maybe it suffered in translation). Every six years, Dutch research institutions are required to self-assess and present to external committees on the research of the past six years and plans for the subsequent six years. The assessment returns a rating of unsatisfactory, good, very good or excellent. The submission, the committee report and the institutional response are posted online, creating public accountability.

The assessment reviews research quality, relevance to society and viability (the “extent to which the organization is equipped for the future”).

For readers of this blog the relevance to society will be of greatest interest. Go straight to Appendix D Table D1 which provides a selection (not an exhaustive list) of indicators for societal impact:

Demonstrable products: reports (for example, for policymaking); articles in professional journals for non-academic readers; instruments, infrastructure, datasets, software tools or designs that the unit has developed for societal target groups; outreach activities, for example lectures for general audiences and exhibitions.

Use of products: patents/licences; use of research facilities by societal parties; projects in cooperation with societal parties; contract research.

Marks of recognition: public prizes; valorisation funding; number of appointments/positions paid for by societal parties; membership of civil society advisory bodies.

The SEP submissions are reviewed by a committee assessing the narratives of research quality and societal relevance. This is similar to the REF. A significant difference is that the committee review happens as a site visit to the submitting unit. This face-to-face element of the assessment creates greater opportunities for evaluation than an arm's-length committee assessing a submission, as in the REF.

What is also similar to the REF and the Australian pilot is that the method and the indicators are predicated on the academic research institution describing the impact of the research. But we know that it isn't the researchers who are making the products, developing the policies or delivering the services that have an impact. Research partners from the private, public and non-profit sectors who make the products, policies and services are the ones making the impact. Yet we ask the research institution to step in and tell someone else's story of impact. That's ok so long as the indicators come from the non-academic partners; however, the indicators in the SEP are all academic-centric.

How long before Canada jumps on the research impact assessment (RIA) bandwagon? Alberta Innovates is implementing the Canadian Academy of Health Sciences' RIA framework. The co-produced pathway to impact is being implemented by some of the Networks of Centres of Excellence, including Kids Brain Health Network, MEOPAR, AllerGen, CellCAN and PREVNet, who helped conceptualize the pathway. However, these are pathways that help to guide the progress from research to impact. They are not research impact assessment protocols.

Research Impact Canada is undertaking an RIA pilot that we riffed off the REF, as explained in Mobilize This! on April 12, 2017. We have used our RIA tool on one example of impact from York's Knowledge Mobilization Unit. Based on that experience, we revised the interview questions we derived from Sarah Morton's contribution analysis. We are revising the guidelines and will develop the tool so it can be used along your pathway to impact, not just for ex post research impact assessment (at the end).

When Canada is ready for a national impact assessment process we will be ready with a validated tool. But Canada, please call us first. Let us help you develop a Canadian research impact assessment exercise.

Universities Create Evidence but Can We Also Use It? / Les universités produisent des données scientifiques, mais savent-elles s’en servir ?

Researchers in higher education (HE) institutions produce lots of research based evidence. When that evidence is about higher education how good are our HE leaders at gathering, synthesizing, assessing and implementing evidence for HE policy and practice? Do they know they need help to do this?

Les établissements d’enseignement supérieur sont la source de nombreuses recherches fondées sur des données scientifiques. Quand ces données concernent l’enseignement supérieur lui-même, dans quelle mesure les dirigeants des établissements réussissent-ils à les rassembler, à les synthétiser, à les évaluer et à les intégrer aux pratiques et politiques ? Sont-ils conscients qu’ils ont besoin d’aide pour y parvenir?

The Leadership Foundation for Higher Education (LFHE) is "committed to developing and improving the management, governance and leadership skills of existing and future leaders of higher education." They have had a long-standing interest in the impacts of HE. This includes a recent assessment of the UK Research Excellence Framework impact case studies from legal, governance and management research. You can read more about that study on our Knowledge Mobilization Journal Club. Their latest endeavour concerns a "What Works" centre on HE evidence use. The UK government has sponsored seven What Works centres on topics ranging from education to clinical practice to aging and more.

To help frame their thinking about a What Works centre for HE, they performed a quick survey of What Works centres and international KMb organizations, a deep dive into one What Works centre, and interviews with 17 leaders in HE and knowledge mobilization, including me and John Lavis (McMaster Health Forum) providing an international perspective. While statements from interviewees are in the report, John and I are the only ones quoted by name. The report was published on July 20, 2017.

What we said is in the report but two things stand out:

1. The focus of the What Works centre will be on the HE institution with HE leaders as the primary focus. Check out a couple of recent KMb journal clubs here and here on institutional perspectives of KMb.

2. The report highlights the need for the What Works centre in HE to achieve impact on HE practices and policies.

But in this I encourage the authors to go a little further. They quote John Lavis speaking about the need to end a policy dialogue with next steps and assign action items. I recommend they do active follow up to support the uptake of the evidence. We know from models like PARIHS that evidence needs to be facilitated in the context of its use in order to create the conditions for effective evidence use. Bailing on the end users once you disseminate the evidence will not facilitate its uptake. Active facilitation needs to happen in the context (i.e. on site) of its use.

Don’t just send evidence to HE leaders. Do workshops with stakeholders to help them learn the evidence (=uptake). Help stakeholders evaluate the evidence to facilitate implementation into new HE policies and practices. Help stakeholders assess the impact of the evidence on those policies and practices.

Dissemination is necessary but not sufficient to support impact.

I will repeat that because it is key: dissemination is necessary but not sufficient to support impact.

LFHE then undertook an ideas lab session to design elements of a successful What Works centre. This ideas lab identified three desirable features of a What Works centre for HE. These include the following and my comments on each:

A knowledge map that would help connect knowledge needs with the people who hold the knowledge

• Maps are hard to keep current and do not easily capture emerging knowledge needs. At York’s Knowledge Mobilization Unit we do not rely on codified knowledge maps but on knowledge brokers who know those who have expertise in demand. Speaking of knowledge brokers….

Impact Champions – boundary spanners who would work for the knowledge sharing system appointed for their skills and expertise in line with specific knowledge needs suggested by the system

• Knowledge brokers = impact champions (although without a cool name!). Like the knowledge brokers connecting the Research Impact Canada (RIC) institutions, impact champions will need to be embedded within networks of academic and non-academic experts to enable connections. LFHE needs to support networks of champions, researchers and end users.

A digital dating system which could be developed in the future as an adjunct to the knowledge map to support the impact champions and their work

• Again, RIC has something to contribute to the LFHE What Works centre. Yaffle.ca performs exactly this function for Memorial University of Newfoundland, and we are exploring it as a platform for RIC. LFHE should look to Yaffle as an existing platform and reach out for an introduction. Why re-invent it when you can build on almost 10 years of experience with Yaffle?

A final observation is that the What Works centre should not be predicated on a knowledge supply-and-demand model. Leaders of HE have their own expertise that needs to be leveraged to implement the evidence in the context of its use. It's not that HE researchers or the What Works centre have knowledge and HE leaders need knowledge. It's more about finding the fit between complementary expertise.

The RIC network has much to share to help LFHE in their efforts. It’s not that RIC has knowledge and LFHE doesn’t. It’s that we have certain experiences and expertise that might be complementary to their own experiences and expertise.

That’s mobilizing knowledge about knowledge mobilization.

Knowledge Mobilization Advice From SSHRC / Les recommandations du CRSH concernant la mobilisation des connaissances

Knowledge Mobilization advice from a research funder is necessarily generic but the advice provided by SSHRC is a great starting point for grant applicants to begin to craft a specific knowledge mobilization strategy. Just don’t leave it to the last day to start!

Les recommandations des organismes de subventions concernant la mobilisation des connaissances (MdC) sont nécessairement générales. Celles du CRSH, toutefois, fournissent aux candidats un point de départ solide pour commencer à mettre sur pied une stratégie de MdC. Mais n’attendez pas à la dernière minute pour commencer!

Anyone completing a SSHRC grant application needs to develop a knowledge mobilization strategy. For those fortunate enough to be at a Research Impact Canada university, help is close at hand. But for everyone else, SSHRC has provided some advice.


SSHRC starts out by underselling their advice. They speak about how the guidelines will help with dissemination of research: “to whom should research results be communicated; how is the process of communicating research results best mapped”. But there is little on dissemination and much on engaged methods of knowledge mobilization.

Don't get me wrong. Communicating research results to end users/beneficiaries is critically important, but it is not enough. We know from the research on research use that making research results accessible is necessary but not sufficient to change behaviour (read anything by Sandra Nutley from the Research Unit for Research Utilisation). We need to engage with end users to identify their needs so researchers work on what is important to stakeholders, not just what researchers think is important (see this article and a stakeholder engagement report by Kids Brain Health Network).

And beyond stakeholder engagement which identifies research priorities we need to practice engaged methods of dissemination and co-production. SSHRC provides a number of examples of this in their advice to grant applicants:

• Meetings with knowledge users, especially at the outset of the project, are an effective vehicle for forging strong and lasting connections.

• When building relationships with organizations, build links across multiple levels, from front-line, program and policy staff to executives.

• To produce knowledge mobilization products that meet users’ needs, researchers can use or repackage existing materials, or develop new ones, in concert with the users and their identified needs.

• Larger projects typically employ a project co-ordinator. The use of knowledge brokers, who have specific skill sets, can be effective.

• Ultimately, the more proactive and multifaceted the approach researchers take with users, the more successful and durable the relationship.

• Successful projects often adopt more than one outreach medium in their knowledge mobilization plan.

• All research teams, but especially those engaging in co-production of knowledge, should outline at the outset of projects the roles and responsibilities of all participants to ensure the voices of all team members, including partners, are represented at all stages of the project.

These are great examples covering the gamut from engaged priority setting to engaged dissemination to engaged co-production of research. Kudos to SSHRC for these.

But here’s the limitation of this advice. It is only generic. Like the impact advice provided by the Research Councils UK, advice from funders to applicants can only be generic. How an applicant in the history of English theatre will mobilize knowledge is different than how an economist working in sustainable business practices will mobilize knowledge. But both need to be informed by the advice from SSHRC.

Applicants need to take the generic advice and develop a specific (“bespoke” as my UK colleagues like to say) knowledge mobilization plan for their grant application. You do need to meet with knowledge users (first bullet in the list above) but which knowledge users, when will you meet, how will you recruit them, and what pre-existing relationships will you build on? This level of specificity is needed for your knowledge mobilization strategy.

As we recommend in our recent publication about supporting knowledge mobilization and impact strategies in grant applications you need to start with a generic impact pathway (like the co-produced pathway to impact) and generic advice (above) and use your own research, stakeholders, activities, partners and indicators to develop a specific impact pathway and specific knowledge mobilization plan.

There is no cookie-cutter approach. Don't leave this section to the day before the application is due. The research plan and the knowledge mobilization/impact plan need to be written concurrently so each will support the other.

And for help call your local knowledge mobilization practitioner – oh yeah – if you’re at a Research Impact Canada member university!

Connecting Impact Pathways to Actual Impacts / Raccorder la trajectoire à l’impact

Researchers are crafting impact strategies in grant applications. Are they getting any help from their universities and their institutional research administrators?

Dans leurs demandes de subvention, les chercheurs mettent au point des stratégies d’impact. Reçoivent-ils de l’aide pour ce faire de la part de leur université et des administrateurs de la recherche?

More from the world of impact in the UK, this time a reflection on a post by Mark Reed and Sarah Buckmaster from February 2016. Sarah and Mark compared the impact pathways from research teams who had been awarded the highest scores for impact in the Research Excellence Framework 2014. For more on REF 2014, see this journal club and last week's post.

The seven studies presented span health, social sciences and humanities with impacts on policy, professional practice and culture – this diversity suggests the 10 common elements of impact pathways are not unique to any discipline or sector. The 10 common elements are: clear connectivity from overall vision to objectives and impact; specificity; tailor made impact; build in flexibility; assign responsibility – name names; demonstrate demand; highlight collaborative partnerships; don’t ignore sensitivities; think long term; record everything.

I’m not going to go into detail in each of these because Mark and Sarah have done that in their post.

What I will reflect on is the role of the university in helping researchers craft these specific impact pathways in their applications. ARMA – the association supporting university research administrators (those people who are hired to help you craft your grant applications) – has a specific group interested in impact. It is not just the job of the grant applicant to ensure impact strategies incorporate these 10 key success elements. It is also the job of institutions to support researchers crafting their grant applications. How many ARMA members receive specific training, not only as REF officers collecting the evidence of impact but also in supporting impact strategies in grant applications? This list of 10 key success elements could form a checklist for ARMA members to use, not only to assess strategies before submission but also to build the capacity of researchers before they start writing the application (a new product idea for Fast Track Impact – you can thank me later, Mark).

At York University (Toronto, Canada), we have published on our process for supporting impact in grant applications. We also lead Research Impact Canada, a network of 12 universities building capacity to support impacts of research. We don’t have a formal impact assessment process like the REF but most Canadian funding programs require the equivalent of impact pathways. Because of this requirement we are sharing tools and building expertise to support impact at the institutional level. This is only now coming onto the radar of CARA (the Canadian ARMA) with an impact planning and assessment workshop I am delivering on May 7 at the CARA annual conference.

It would be interesting to ask the authors of these highly successful impact strategies what support they received from their institution during the grant application process. This would demonstrate if there is existing impact expertise in research administrators or if there is a skills gap and an opportunity for institutions to invest in capacity building to support impact which, in turn, will support success in the REF. It is a little late to start to build capacity to support impact in an application that won’t be funded until 2018 at the earliest and therefore won’t likely contribute to impacts in REF 2021. But Mark and Sarah advocate thinking long term. REF 2026 is just around the corner, at least in terms of impact which can take years after the funded grant project to manifest.

And don’t forget to call Canada. We are happy to share our supports for impact in grant applications and look forward to learning from UK experts as well. CARA and ARMA are already collaborating on accreditation for research administrators. Maybe impact could be part of this exchange. Just ask @JulieEBayley.

Watching Impact in the REF and How It Informs the Canadian Context / Le REF en observation : comment l’impact s’y manifeste, et son influence sur la situation canadienne

The Research Excellence Framework is a system wide research assessment exercise that includes assessment of the various non-academic impacts of research. As the UK prepares for REF 2021 Research Impact Canada is piloting impact assessment in Canada. Not because of any reporting requirement but because we should understand and communicate the impacts we are making. It’s the right thing to do.

Au Royaume-Uni, le Research Excellence Framework est un exercice d’évaluation de la recherche appliqué à l’ensemble du système d’enseignement supérieur, qui prévoit l’évaluation des nombreux impacts de la recherche en dehors de l’université. Tandis que ce pays prépare son REF de 2021, au Canada, le Réseau Impact Recherche réalise son propre projet pilote d’évaluation de l’impact. Non pas parce qu’une autorité quelconque nous l’impose, mais parce que comprendre et communiquer les effets que nous provoquons… c’est ce qu’il faut faire, tout simplement.

For 10 years Research Impact Canada (RIC) has been leading the development of institutional knowledge mobilization practices that create the conditions to maximize the social, economic and/or environmental impacts of university research. Our vision statement is:

We will maximize the impact of university research for the social, cultural, economic, environmental, and health benefits across local and global communities.

If we say we will maximize impact, we need a way to assess the impact of the research we help to mobilize. We looked to the UK Research Excellence Framework for inspiration (for more on the REF and why it is important, see this recent journal club entry). The REF required all UK universities to articulate the impacts of research by following guidelines (page 26 here) and completing an impact case study template. There is much (not all) that is good about the REF. But there is much in the work of RIC that is not captured in the narrowly construed REF definitions of research and of impact. There is also a decoupling of the efforts made by institutions (as reported in the environment data) to support impact from the impact cases themselves.

The Evaluation Committee of Research Impact Canada did a deep dive into the REF and developed our own adaptation of the REF impact assessment guidelines and case study template. The major changes are summarized in the table below:

Changes between REF and RIC impact assessment table

View this table as a PDF

That’s what we have done. What are we doing?

We are piloting the RIC research impact assessment guidelines and impact case study template on one example from York’s Knowledge Mobilization Unit. We have also delivered our first impact assessment workshop to scientists and knowledge brokers at the research-policy interface at Eawag, the Swiss water research institute. We received good feedback from them and will be incorporating this into successive iterations of the guidelines and template. Ultimately we will roll this out through RIC member universities and beyond to provide a tool for researchers and institutions to collect the evidence of impact and inform different means of disseminating stories of impact so our various stakeholders (funders, partners, governments, and the public) can see the difference that universities make on society, the economy and/or the environment.

Our work in Canada is timely: HEFCE reports it has just finished consultations on REF 2021 and is about to review and analyze over 370 responses. We can continue to learn from each other. The Canadian and UK contexts are different; the main difference is the driver. We don't have a REF in Canada, so we have greater leeway to construct research impact assessment tools that work in our contexts. But our contexts are also not really that different: UK and Canadian funders require grant applicants to express the potential impacts of their research and the plans (and budgets) for creating those impacts. Now Canada also has a mechanism to facilitate the collection and reporting of the evidence of impact, one inspired by the REF but adapted to meet the needs of the Research Impact Canada network.

Give us a call, HEFCE. We're happy to share as you pore through those 370 responses!

Canada is looking for a Chief Knowledge Broker / Le Canada en quête d’un courtier de connaissances en chef

Canada is searching for a Chief Science Advisor. They are looking for someone with an outstanding track record of scholarship. What they really need is a Chief Knowledge Broker.

Le gouvernement canadien cherche à pourvoir le poste de conseiller scientifique en chef. La personne recherchée doit posséder un bagage de connaissances hors du commun. Ce dont le gouvernement a besoin, en fait, c’est un courtier de connaissances en chef.

See the full job ad here

Kirsty Duncan, Minister of Science

Justin Trudeau and Kirsty Duncan are seeking a PhD scientist with a strong record of peer-reviewed publications and research management, someone who will “focus on how scientific information is disseminated and used by the federal government, and how evidence is incorporated into government-wide decision-making”. Sounds like knowledge mobilization to me!

To be clear, they are not looking for a PhD in implementation science (although that would be excellent). They are looking for one of Canada's greatest molecular biologists, cosmologists, mechanical engineers, neuroscientists, nanotechnologists, etc. to step into a role that defines knowledge mobilization. Tell me how someone with an h-index of 50+ is going to know the first thing about how scientific information is disseminated and used in government decision making?

To be fair, they do consider that “experience in one or more of the following areas would be an asset:

• involvement in scientific reviews within legislative or regulatory processes;
• public scientific communication;
• promoting transparency and integrity in scientific research; and
• evaluation of scientific or research programs or projects.”

Public scientific communication and involvement in legislative processes are just two of four possible assets (hence optional) for a job that is all about “how scientific information is disseminated and used”.

Kirsty and Justin, these need to be at the top of your list of mandatory experience, not buried as optional nice-to-haves. You need to be looking for someone with expertise at the research-to-policy interface. Canada's best particle physicist will not be able to provide much help when asked to advise on sensitive topics such as vaccines, GMO foods or First Nations. You don't need a specialist PhD. You need a process specialist who has demonstrated excellence in facilitating evidence use across all sorts of disciplines.

Good luck with the search. Don’t forget to come back to Research Impact Canada for advice on seeking science advice.

[Sorry, gentle readers, the competition closed February 13, 2017. You don't need to rush to update your résumé for this job. But maybe the Chief Science Advisor will see that a PhD in whatever actually needs a knowledge broker to be successful.]