Physical Review Physics Education Research (Dec 2016)
Validity of peer grading using Calibrated Peer Review in a guided-inquiry, conceptual physics course
Abstract
Constructing and evaluating explanations are important science practices, but in large classes it can be difficult to effectively engage students in these practices and provide feedback. Peer review and grading are scalable instructional approaches that address these concerns, but they raise questions about the validity of the peer grading process. Calibrated Peer Review (CPR) is a web-based system that scaffolds peer evaluation through a “calibration” process in which students evaluate sample responses and receive feedback on their evaluations before evaluating their peers. Guided by an activity theory framework, we developed, implemented, and evaluated CPR-based tasks in guided-inquiry, conceptual physics courses for future teachers and general education students. The tasks were developed through iterative testing and revision. Effective tasks had specific and directed prompts and evaluation instructions. Using these tasks, over 350 students at three universities constructed explanations or analyzed physical phenomena, and evaluated their peers’ work. By independently assessing students’ responses, we evaluated the CPR calibration process and compared students’ peer reviews with expert evaluations. On the tasks analyzed, peer scores were equivalent to our independent evaluations. On a written explanation item included on the final exam, students in the courses using CPR outperformed students in similar courses using traditional writing assignments without a peer evaluation element. Our research demonstrates that CPR can be an effective way to explicitly incorporate the science practices of constructing and evaluating explanations into large classes without placing a significant burden on the instructor.