Cogent Education (Jan 2018)
Investigating the validity of an oral assessment rater training program: A mixed-methods study of raters’ perceptions and attitudes before and after training
Abstract
Rater variability is a well-documented source of measurement error in performance assessment, particularly in oral proficiency tests. Rater training is commonly used to compensate for the various sources of rater variability and to improve the quality of raters’ assessments. However, little research has examined the nature of training programs and raters’ perceptions of them using a combined qualitative and quantitative design. Likewise, despite prior work on test takers’ reactions to oral test performance, little is known about how raters perceive the feedback given on their scoring performance and whether that feedback improves their ratings. In this study, twenty raters rated 300 test takers’ oral performances before and after a training program, and their perceptions, attitudes, expectations, and evaluations were elicited through questionnaires, interviews, and observations. The qualitative and quantitative analyses indicated that the training program was effective in addressing raters’ attitudes, perceptions, and evaluations: it reduced rater severity and bias and increased rater consistency. In addition, informing raters of the goals of performance assessment during training reduced the halo effect. Finally, raters with positive attitudes toward rating feedback incorporated it more successfully into their rating and thus achieved greater consistency and less bias in their subsequent ratings. Consequently, decision-makers should be less concerned with raters’ expertise levels and should instead establish rater training programs to increase rater consistency and reduce rater bias in measurement.
Keywords