Implementation Science Communications (Feb 2023)
Enhancing review criteria for dissemination and implementation science grants
Abstract
Background: Existing grant review criteria do not consider the unique methods and priorities of Dissemination and Implementation Science (DIS). The ImplemeNtation and Improvement Science Proposals Evaluation CriTeria (INSPECT) scoring system comprises 10 criteria based on Proctor et al.’s “ten key ingredients” and was developed to support the assessment of DIS research proposals. We describe how we adapted INSPECT and used it in combination with the NIH scoring system to evaluate pilot DIS study proposals through our DIS Center.
Methods: We adapted INSPECT to broaden considerations for diverse DIS settings and concepts (e.g., by explicitly including dissemination and implementation methods). Five PhD-level researchers with intermediate to advanced DIS knowledge were trained to review seven grant applications using both the INSPECT and NIH criteria. INSPECT overall scores range from 0 to 30 (higher scores are better), and NIH overall scores range from 1 to 9 (lower scores are better). Each grant was independently reviewed by two reviewers and then discussed in a group meeting to compare reviewers’ experiences of using both sets of criteria and to finalize scoring decisions. A follow-up survey was sent to grant reviewers to solicit further reflections on each scoring criterion.
Results: Averaged across reviewers, INSPECT overall scores ranged from 13 to 24, while NIH overall scores ranged from 2 to 5. Reviewer reflections highlighted the unique value and utility of each set of scoring criteria. The NIH criteria had a broader scientific purview and were better suited to evaluating effectiveness-focused and pre-implementation proposals that did not test implementation strategies. The INSPECT criteria made it easier to rate the quality of a proposal’s integration of DIS considerations and to assess its potential for generalizability, real-world feasibility, and impact. Overall, reviewers noted that INSPECT was a helpful tool for guiding DIS research proposal writing.
Conclusions: We confirmed the complementarity of using both sets of scoring criteria in our pilot study grant proposal review and highlighted the utility of INSPECT as a potential DIS resource for training and capacity building. Possible refinements to INSPECT include more explicit reviewer guidance on assessing pre-implementation proposals, allowing reviewers to submit written commentary with each numerical rating, and greater clarity on rating criteria with overlapping descriptions.
Keywords