The Leiden Manifesto for Research Metrics

Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429-431.
https://www.nature.com/news/bibliometrics-the-leiden-manifesto-for-research-metrics-1.17351
No Abstract: “Use these ten principles to guide research evaluation, urge Diana Hicks, Paul Wouters and colleagues.”
This article lays out ten principles for research evaluation. In a world increasingly obsessed with research metrics (quality, or excellence, as well as impact), we need principles by which we can either design research evaluation systems or assess the systems by which we are evaluated. The manifesto comes from the group behind the Leiden Ranking, one of the global university ranking systems. At first this seems like a potential conflict of interest, but as you will see, the people running a ranking system based on bibliometrics are themselves critical of drawing conclusions from numbers alone.
Their ten principles are listed below, with commentary on those that can be extrapolated to thinking about impact assessment systems:
1. Quantitative evaluation should support qualitative, expert assessment
No numbers without stories, and no stories without numbers. Research on the UK Research Excellence Framework (REF) concludes that the narrative or case study is the right tool for impact assessment, but the stories need numbers to illustrate the claims of impact.
2. Measure performance against the research missions of the institution, group or researcher
3. Protect excellence in locally relevant research
Key for impact assessment. Many impacts, at least initially, are local because relationships tend to be local. These should not be discounted against national or global impacts, especially since those latter impacts are usually distal from the original research. It also depends on why you are doing your impact assessment: if you are assessing how a researcher or institution engages for impact, then local is critical; if you are assessing national or international networks, then scale becomes relevant.
4. Keep data collection and analytical processes open, transparent and simple
Open and transparent, yes; however, I have never seen a “simple” impact assessment method, although tools and skilled personnel can reduce transaction costs.
5. Allow those evaluated to verify data and analysis
6. Account for variation by field in publication and citation practices
The same applies to impact assessment. Impacts in the humanities should not be compared against impacts in engineering. Academic and non-academic experts should identify what counts as impact in their fields.
7. Base assessment of individual researchers on a qualitative judgement of their portfolio
Impact assessment should be based not on a single publication but on a body of published work and the experience of the researcher(s).
8. Avoid misplaced concreteness and false precision
9. Recognize the systemic effects of assessment and indicators
Any impact assessment system will, by its nature, affect the system it is measuring. Institutions and individuals will change their behaviour to maximize performance. This may or may not be a good thing.
10. Scrutinize indicators regularly and update them
What is implicit but not explicitly stated is that if you go beyond collecting evidence of impact and develop an assessment system, then someone will be assessed against the performance of someone else. The indicators and methods you choose must be public, verifiable and comparable across those being assessed.
Questions for brokers:
1. Are you collecting the evidence of impact from your research? How do your methods and indicators stack up against these 10 principles?
2. Look at research evaluation systems that have impact as a feature (UK REF, NZ PBRF, Australia Engagement and Impact Assessment, NL Standard Evaluation Protocol, others?). How do they measure up against these principles?
3. Where is impact assessment in your jurisdiction? (Hint: if you don’t think there is one, someone in a policy capacity is likely talking about it.)
Research Impact Canada is producing this journal club series as a way to make evidence on Knowledge Mobilization more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read this open access article, then come back to this post and join the journal club by posting your comments.