Squires, J. E., Estabrooks, C. A., Newburn-Cook, C. V., & Gierl, M. (2011). Validation of the conceptual research utilization scale: An application of the standards for educational and psychological testing in healthcare. BMC Health Services Research, 11(1), 107. doi:10.1186/1472-6963-11-107
Abstract
Background: There is a lack of acceptable, reliable, and valid survey instruments to measure conceptual research utilization (CRU). In this study, we investigated the psychometric properties of a newly developed scale (the CRU Scale).
Methods: We used the Standards for Educational and Psychological Testing as a validation framework to assess four sources of validity evidence: content, response processes, internal structure, and relations to other variables. A panel of nine international research utilization experts performed a formal content validity assessment. To determine response process validity, we conducted a series of one-on-one scale administration sessions with 10 healthcare aides.
Internal structure and relations to other variables validity was examined using CRU Scale response data from a sample of 707 healthcare aides working in 30 urban Canadian nursing homes. Principal components analysis and confirmatory factor analyses were conducted to determine internal structure. Relations to other variables were examined using: (1) bivariate correlations; (2) change in mean values of CRU with increasing levels of other kinds of research utilization; and (3) multivariate linear regression.
Results: Content validity index scores for the five items ranged from 0.55 to 1.00. The principal components analysis predicted a 5-item 1-factor model. This was inconsistent with the findings from the confirmatory factor analysis, which showed best fit for a 4-item 1-factor model.
Bivariate associations between CRU and other kinds of research utilization were statistically significant (p<0.01) for the latent CRU scale score and all five CRU items. The CRU scale score was also shown to be a significant predictor of overall research utilization in multivariate linear regression.
Conclusions: The CRU scale showed acceptable initial psychometric properties with respect to responses from healthcare aides in nursing homes. Based on our validity, reliability, and acceptability analyses, we recommend using a reduced (four-item) version of the CRU scale to yield sound assessments of CRU by healthcare aides. Refinement to the wording of one item is also needed. Planned future research will include: latent scale scoring, identification of variables that predict and are outcomes to conceptual research use, and longitudinal work to determine CRU Scale sensitivity to change.
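The abstract reports a content validity index (CVI) for each of the five items but does not walk through the calculation. For readers unfamiliar with the index, here is a minimal sketch of how an item-level CVI is commonly computed; the expert ratings below are made up, and the convention of treating ratings of 3 or 4 on a 4-point relevance scale as "relevant" is a widely used assumption on our part, not a detail confirmed in the paper.

```python
# Illustrative item-level content validity index (I-CVI) calculation.
# The nine ratings are hypothetical, NOT the study's data: each expert
# rates one item's relevance on a 4-point scale, and ratings of 3 or 4
# count as "relevant" (a common convention; the paper's exact procedure
# may differ).
expert_ratings = [4, 3, 4, 4, 2, 3, 4, 3, 4]  # one item, nine experts

relevant = sum(1 for rating in expert_ratings if rating >= 3)
i_cvi = relevant / len(expert_ratings)
print(f"I-CVI = {i_cvi:.2f}")  # here 0.89; the paper reports 0.55 to 1.00 across items
```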
This original research article tests a tool to evaluate conceptual research use, which is one of three different uses of research: instrumental, conceptual and symbolic, each of which is believed to represent a single concept.
“Instrumental research utilization is a direct use of research knowledge. It refers to the concrete application of research in clinical practice, either in making specific decisions or as knowledge to guide specific interventions related to patient care. For instrumental use, the research is often translated into a material and useable form (e.g., a policy, protocol or guideline). Conceptual research utilization refers to the cognitive use of research where the research findings may change one’s opinion or mind set about a specific practice area but not necessarily one’s particular action. It is an indirect application of research knowledge. An example would be the use of knowledge on the importance of Family-Centered Care to guide clinical practice. Symbolic (or persuasive) research utilization is the use of research knowledge as a political tool in order to influence policies and decisions or to legitimate a position.” Referenced literature has shown that policy makers use research to inform a decision rather than to act on it directly.
These three different uses of research are fundamental concepts that form part of any KMb 101 and are important for all KMb practitioners.
The authors wished to develop a scale to measure conceptual research use because “while the number of studies examining research utilization has increased significantly in the past decade, the majority continue to examine research utilization as a general construct or instrumentally. Conceptual use of research findings has received little attention” even though the authors feel that conceptual research use better describes the use of research in an individual practitioner situation.
The authors developed a new instrument to examine conceptual research use and evaluated the instrument for validity, reliability, and acceptability. The instrument was evaluated with 707 healthcare aides working in 30 urban Canadian nursing homes. There was an interesting difference in conceptual research use between healthcare aides who spoke English as a first language and those whose first language was not English. The instrument asks research users to rate five items on a 5-point Likert-type frequency scale where 1 indicates ‘never’, 2 ‘rarely’, 3 ‘occasionally’, 4 ‘frequently’ and 5 ‘very frequently’. It asks how often best practice knowledge (they did not ask about “research”) leads to the activities reflected in each of the following items; a simple scoring sketch follows the list:
- Give new knowledge or information?
- Raise awareness?
- Help change your mind?
- Give new ideas?
- Help make sense of things?
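To make the response scale concrete, here is a minimal scoring sketch: a plain mean of the five item responses. Note that this averaging is only an illustrative assumption on our part; the authors work with a latent CRU scale score and list latent scale scoring as planned future research.

```python
# Minimal sketch: scoring one (hypothetical) healthcare aide's responses to
# the five CRU items on the 1-5 frequency scale (1 = never ... 5 = very frequently).
# A simple mean is assumed here for illustration only; the authors use a
# latent scale score, which is derived differently.
responses = {
    "give_new_knowledge": 4,
    "raise_awareness": 5,
    "help_change_mind": 3,
    "give_new_ideas": 4,
    "help_make_sense": 4,
}

cru_mean = sum(responses.values()) / len(responses)
print(f"Mean CRU score: {cru_mean:.1f} on a 1-5 scale")
```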
The authors state that these findings are not generalizable beyond healthcare aides in Canadian nursing homes who speak English as their first language. That may be true, but it makes this paper of limited value to a broader KMb practice. For practitioners, though, this research points a way forward for thinking about measuring conceptual research use. Sure, there's no evidence this scale is transferable outside of their controlled setting, but if you don't have anything else, it gives you a place to start.
Key Points for discussion:
- The difference between instrumental, conceptual and symbolic use is important as brokers seek to create relationships that respond to the needs of researchers and decision makers. Different KMb methods might be needed if you seek different outcomes or want to inform instrumental, conceptual or symbolic research utilization.
- It is important to have a tool to evaluate the effect of your KMb intervention, although validated tools are few and far between.
- The tool might not be validated outside of the present setting, but go ahead. Try it. And let Carole Estabrooks’ team know what happened.
Beware: there are lots of stats in this paper. “Before running the PCA, the Kaiser-Meyer-Olkin measure of sampling adequacy and the Bartlett test of sphericity were assessed to determine if the data was appropriate for PCA”…and… “The RMSEA (0.140) did not support close fit but SRMSR (0.03) and CFI (0.977) did support close fit.”
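For anyone curious what those checks actually involve, here is a minimal sketch on simulated Likert-type responses (not the study's data). It assumes the third-party Python packages factor_analyzer and scikit-learn, and it only mirrors the sampling-adequacy checks and the PCA step, not the paper's confirmatory factor analysis or its fit indices (RMSEA, SRMSR, CFI).

```python
# Sketch of the pre-PCA checks quoted above, on simulated data (NOT the
# study's 707 responses). Assumes factor_analyzer and scikit-learn are installed.
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Simulate 707 respondents answering 5 correlated Likert-type items (1-5).
latent = rng.normal(size=(707, 1))
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(707, 5))), 1, 5)
df = pd.DataFrame(items, columns=["knowledge", "awareness", "mind", "ideas", "sense"])

# Sampling adequacy (KMO) and Bartlett's test of sphericity, the two checks
# the paper runs before the PCA.
chi_square, p_value = calculate_bartlett_sphericity(df)
_, kmo_overall = calculate_kmo(df)
print(f"Bartlett chi-square = {chi_square:.1f}, p = {p_value:.3g}")
print(f"KMO = {kmo_overall:.2f}")  # values above roughly 0.6 are usually taken as adequate

# Principal components analysis: a dominant first component is consistent
# with a one-factor structure.
pca = PCA().fit(df)
print("Explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))
```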
This is not an easy read for KMb practitioners.
For another excellent article by Estabrooks, see “The intellectual structure and substance of the knowledge utilization field: A longitudinal author co-citation analysis, 1945 to 2004.” This paper maps the evolution of the broad research-to-policy/practice field, traces the main theories underpinning different stages of development, and follows the evolution of the intellectual structure of the field and the different words we use to describe our work.
ResearchImpact-RéseauImpactRecherche (RIR) is producing this journal club series as a way to make evidence on KMb more accessible to knowledge brokers and to create online discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read this open access article, then come back to this post and join the journal club by posting your comments.