Medical Education Online (Dec 2002)
Evaluation of Speakers at a National Continuing Medical Education (CME) Course
Abstract
Purpose: Evaluations of a national radiology continuing medical education (CME) course in thoracic imaging were analyzed to determine what constitutes effective and ineffective lecturing.

Methods and Materials: Evaluations of sessions and individual speakers participating in a five-day course jointly sponsored by the Society of Thoracic Radiology (STR) and the Radiological Society of North America (RSNA) were tallied by the RSNA Department of Data Management and three members of the STR Training Committee. Comments were collated and analyzed to determine the number of positive and negative comments and to identify common themes related to ineffective lecturing.

Results: Twenty-two sessions were evaluated by 234 (75.7%) of 309 professional registrants. Eighty-one speakers were evaluated by an average of 153 registrants (range, 2–313). Mean ratings for the 10 items evaluating sessions ranged from 1.28 to 2.05 (1 = most positive, 4 = least positive; SD 0.451–0.902). The average speaker rating was 5.7 (1 = very poor, 7 = outstanding; SD 0.94; range 4.3–6.4). A total of 862 comments were analyzed, with 505 (58.6%) considered positive and 404 (46.9%) considered negative (the totals exceed 862 because a single comment could contain both positive and negative statements). Poor content was mentioned most frequently, accounting for 107 (26.5%) of the 404 negative comments and applying to 51 (63%) of the 81 speakers. Other negative comments, in order of decreasing frequency, related to delivery, image slides, command of the English language, text slides, and handouts.

Conclusions: Individual evaluations of speakers at a national CME course provided information about the quality of lectures that evaluations of grouped presentations did not. Systematic review of speaker evaluations yielded specific information on the types and frequency of features associated with ineffective lecturing.
This information can be used to design CME course evaluations, design future CME course outcomes studies, provide training to presenters, and monitor presenter performance.