Canadian Medical Education Journal (Apr 2024)

‘What would my peers say?’ Comparing the opinion-based method with the prediction-based method in Continuing Medical Education course evaluation

  • Jamie S Chua,
  • Merel van Diepen,
  • Marjolijn D Trietsch,
  • Friedo W Dekker,
  • Johanna Schönrock-Adema,
  • Jacqueline Bustraan

DOI: https://doi.org/10.36834/cmej.77580

Abstract


Background: Although medical courses are frequently evaluated via surveys with Likert scales ranging from “strongly agree” to “strongly disagree,” low response rates limit their utility. In undergraduate medical education, a new method, in which students predicted what their peers would say, required fewer respondents to obtain similar results. However, this prediction-based method has not been validated for continuing medical education (CME), which typically targets a more heterogeneous group than medical students.

Methods: In this study, 597 participants of a large CME course were randomly assigned either to express personal opinions on a five-point Likert scale (opinion-based method; n = 300) or to predict the percentage of their peers choosing each Likert scale option (prediction-based method; n = 297). For each question, we calculated the minimum number of respondents needed for stable average results using an iterative algorithm. We compared mean scores and the distribution of scores between the two methods.

Results: The overall response rate was 47%. The prediction-based method required fewer respondents than the opinion-based method to obtain similar average responses. Mean response scores were similar in both groups for most questions, but the prediction-based method produced fewer extreme responses (strongly agree/disagree).

Conclusions: We validated the prediction-based method for evaluating CME and provide practical considerations for applying this method.
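The abstract mentions an iterative algorithm for determining the minimum number of respondents needed for stable average results, without spelling out the procedure. The sketch below is a minimal illustration of one plausible such procedure, assuming stability is defined as the running mean staying within a fixed tolerance of the full-sample mean across many random response orders; the tolerance, number of reshuffles, and example data are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def min_respondents_for_stability(scores, tol=0.1, n_orders=1000, seed=0):
    """Estimate the smallest number of respondents after which the running
    mean stays within `tol` of the full-sample mean, averaged over many
    random response orders. All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    full_mean = scores.mean()
    minima = []
    for _ in range(n_orders):
        shuffled = rng.permutation(scores)
        running_mean = np.cumsum(shuffled) / np.arange(1, len(shuffled) + 1)
        stable = np.abs(running_mean - full_mean) <= tol
        # Smallest sample size n such that the running mean stays within
        # tolerance for every sample size >= n in this ordering.
        unstable = np.where(~stable)[0]
        n_min = (unstable[-1] + 2) if unstable.size else 1
        minima.append(n_min)
    return int(np.ceil(np.mean(minima)))

# Hypothetical Likert responses (coded 1-5) for a single evaluation question
responses = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4, 5, 4, 4, 3, 4]
print(min_respondents_for_stability(responses))
```

In a comparison like the one described in the abstract, such a procedure would be run per question for each evaluation method, and the resulting minimum respondent counts compared between the opinion-based and prediction-based groups.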