Lebel, J. & McLean, R. (2018). A better measure of research from the global south. Nature, 559, 23-26. https://www.nature.com/articles/d41586-018-05581-4
Funders Jean Lebel and Robert McLean describe a new tool for judging the value and validity of science that attempts to improve lives.
This is a good article for anyone interested in assessing research quality and utility.
This Nature Comment article is best read alongside the original 2016 IDRC report which provides more detail on implementing the Research Quality Plus (RQ+) research assessment tool developed and tested by Canada’s International Development Research Centre (IDRC).
The RQ+ tool is predicated on the observation that “Dominant techniques of research evaluation take a narrow view of what constitutes quality, thus undervaluing unique solutions to unique problems.” Current methods (peer review and citation analysis) do not give equal recognition to local researchers. They apply the same yardstick to local and foreign researchers even though their motivations and incentives may differ (i.e. they ignore local context). RQ+ “recognizes that scientific merit is necessary, but not sufficient. It acknowledges the crucial role of stakeholders and users in determining whether research is salient and legitimate. It focuses attention on how well scientists position their research for use, given the mounting understanding that uptake and influence begins during the research process, not only afterwards.” These are the four primary dimensions (each with sub-dimensions) of RQ+ assessment: integrity, legitimacy, importance, and positioning for use.
Assessment is a four-step process (best described in the original 2016 report):
- Select the project and data sources (research outputs, interviews, related documents)
- Characterize key influences, using a rubric (low, medium, high) to examine the context of the research: maturity of the research field; research capacity strengthening; risk in the data environment; risk in the political environment; risk in the research environment
- Rate the quality of the research on the four primary dimensions (above) and their subdimensions using an 8-point scale
- Synthesize the ratings (across parallel reviewers), rolling up across projects, programs and portfolios (i.e. RQ+ scales from a project to a portfolio).
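To make steps 3 and 4 concrete, here is a minimal sketch of how ratings might roll up from subdimensions to reviewers to a portfolio. The four dimension names come from the article; the use of simple averaging at each level is my assumption for illustration only, not IDRC’s documented synthesis method (the 2016 report describes the actual procedure).

```python
# Hypothetical sketch of RQ+ rating synthesis (steps 3-4).
# ASSUMPTION: simple averaging at every level; IDRC's actual
# aggregation rules are specified in the original 2016 report.
from statistics import mean

# The four primary dimensions named in the article.
DIMENSIONS = ["integrity", "legitimacy", "importance", "positioning for use"]

def dimension_scores(subdimension_ratings):
    """Average a reviewer's 8-point subdimension ratings into one score per dimension."""
    return {dim: mean(ratings) for dim, ratings in subdimension_ratings.items()}

def synthesize(reviewer_scores):
    """Combine parallel reviewers' dimension scores into project-level scores."""
    return {dim: mean(r[dim] for r in reviewer_scores) for dim in DIMENSIONS}

def portfolio_rollup(project_scores):
    """Roll project-level scores up to a portfolio-level profile."""
    return {dim: mean(p[dim] for p in project_scores) for dim in DIMENSIONS}

# Example: two parallel reviewers rate one project on the 8-point scale
# (two illustrative subdimension ratings per dimension).
reviewer_a = dimension_scores({
    "integrity": [7, 6], "legitimacy": [5, 6],
    "importance": [8, 7], "positioning for use": [4, 5],
})
reviewer_b = dimension_scores({
    "integrity": [6, 6], "legitimacy": [6, 7],
    "importance": [7, 7], "positioning for use": [5, 5],
})
project = synthesize([reviewer_a, reviewer_b])
portfolio = portfolio_rollup([project])  # a one-project "portfolio"
```

The point of the sketch is only that the same rating structure aggregates at every level, which is what lets RQ+ scale from a single project to a whole portfolio.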
IDRC did a retrospective analysis of 170 projects to validate RQ+ and was able to bust three myths. First: southern-only research can be high quality (i.e. you don’t need researchers from an industrialized nation to do high-quality research). Second: capacity strengthening and research excellence go hand in hand. Third: research can be rigorous and useful (something Amanda Cooper demonstrated in last month’s journal club).
The IDRC review was done ex post, but IDRC is exploring how it might use the tool ex ante for grant selection, as well as formatively to monitor the progress of individual projects.
One thing I don’t yet understand is the relationship between steps 2 and 3. I understand each of them, but I don’t (yet) see how characterizing key influences on a 3-point scale affects the rating of the dimensions and subdimensions on the 8-point scale: according to Figure 5 in the original 2016 report, it is the 8-point ratings that are rolled up for the overall assessment. What role, then, do the key influences play?
Questions for brokers
- Can you figure out the one thing I don’t understand, above?
- The REF provides little guidance to promote consistency in either creating or assessing research impact case studies. Might this tool create greater consistency, at least for research excellence (and for research engagement, if you’re subject to the Excellence in Research for Australia exercise)?
- RQ+ is concerned with assessing research excellence and its “positioning for use”. This clearly includes research engagement and is pointing towards impact assessment but isn’t there yet. What would you do to add an element of impact assessment to the RQ+ method?
Research Impact Canada is producing this journal club series to make evidence on knowledge mobilization more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read this open access article, then come back to this post and join the journal club by posting your comments.