Computers and Education Open (Dec 2021)

Concurrent and retrospective metacognitive judgements as feedback in audience response systems: Impact on performance and self-assessment accuracy

  • Pantelis M. Papadopoulos,
  • Nikolaus Obwegeser,
  • Armin Weinberger

Journal volume & issue
Vol. 2
p. 100046

Abstract

Asking questions in classrooms can prompt metacognitive judgments in students about their confidence in being able to answer correctly. In audience response systems (ARSs), these judgments can be elicited and used as additional feedback metrics. This study (n = 79) explores how online concurrent item-by-item judgments (OCJ) and retrospective composite judgments of performance accuracy (RJPA) can enhance students' performance and self-assessment accuracy (i.e., calibration, as measured by sensitivity, specificity, and the absolute accuracy index). In each of eight weekly sessions, the students answered a multiple-choice quiz, indicated their confidence that each answer was correct (OCJ), and estimated their final score (RJPA). The quizzes followed the voting/revoting paradigm, in which students answer all the quiz questions, receive feedback, and answer the same questions again before the correct answers are shown. The students were randomly assigned to one of two conditions based on the feedback they received in the ARS: the OCJ group (n = 41) received the percentage distribution of answers and their peers' OCJs as feedback metrics, while the RJPA group (n = 38) received the percentage distribution and their peers' RJPAs. Data analysis showed a systematic underconfidence that affected students' OCJs. As a result, students in the RJPA group scored significantly higher than those in the OCJ group, self-assessed more accurately in the revoting phase, and felt more confident overall in the revoting phase. The study also discusses the relationship between the two types of judgment and the variability in calibration between the two study phases.
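
For readers unfamiliar with the calibration metrics named above, the following is a minimal sketch assuming the definitions common in the metacognitive monitoring literature: sensitivity as the proportion of correct answers the student was confident about, specificity as the proportion of incorrect answers the student was not confident about, and the absolute accuracy index as the mean squared deviation between judged confidence and actual correctness. The function name, the binary encoding of confidence, and the example data are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the three calibration measures, assuming the
# common definitions from the metacognitive monitoring literature;
# the paper's exact operationalization may differ.

def calibration_metrics(confident, correct):
    """confident, correct: parallel lists of booleans, one per quiz item."""
    pairs = list(zip(confident, correct))
    hits      = sum(c and k for c, k in pairs)              # confident & correct
    misses    = sum((not c) and k for c, k in pairs)        # unconfident & correct
    false_pos = sum(c and (not k) for c, k in pairs)        # confident & incorrect
    true_neg  = sum((not c) and (not k) for c, k in pairs)  # unconfident & incorrect

    sensitivity = hits / (hits + misses) if (hits + misses) else 0.0
    specificity = true_neg / (true_neg + false_pos) if (true_neg + false_pos) else 0.0
    # Absolute accuracy index: mean squared gap between judged confidence
    # (0/1 here) and actual correctness (0/1); lower values mean better calibration.
    aai = sum((int(c) - int(k)) ** 2 for c, k in pairs) / len(pairs)
    return sensitivity, specificity, aai

# Example: a 5-item quiz with item-by-item confidence (OCJ-style) vs. results.
sens, spec, aai = calibration_metrics(
    confident=[True, True, False, False, True],
    correct=[True, False, False, True, True],
)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} AAI={aai:.2f}")
```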

Keywords