Physical Review Physics Education Research (Oct 2023)
Assessment tool to understand how students justify their decisions in data comparison problems
Abstract
Students at all levels of education experience difficulties with the concepts of measurement uncertainties. One task that draws on these concepts is a data comparison problem, in which students decide whether two datasets are in agreement, an authentic scientific practice. To support students with these concepts and tasks, sensitive instruments are needed that probe students’ understanding and track their progress. We present a tool in the form of a coding manual for analyzing students’ written justifications in such data comparison problems. With this coding manual, justifications are coded according to the quantity that is compared and the criterion used to decide the comparison. The coding manual was used in the evaluation of a digital learning environment (DLE). In this evaluation, 154 participants (aged 14–17 years) wrote a justification for a data comparison problem before and after working through the DLE. The participants were randomly assigned to one of three groups, each of which was taught an increasing number of concepts regarding measurement uncertainties. By analyzing the participants’ justifications with our coding manual, we could probe their ability to compare datasets at a fine-grained level. The sensitivity of the tool is illustrated by the increase in higher-quality codes as the amount of conceptual teaching increased across the three groups. We argue that our tool for coding justifications can be applied to any data comparison problem, gives detailed information about which data analysis methods students use and how they reach their conclusions, and thus contributes to research on students’ understanding of measurement uncertainties.
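For illustration, one common quantitative criterion for such a data comparison task, sketched below, compares the difference between the two sample means with their combined standard uncertainties; this is a standard textbook criterion and is not claimed to be the specific criterion coded in the manual or taught in the DLE.

```latex
% Illustrative agreement criterion (assumption: generic combined-uncertainty test,
% not necessarily the criterion used in the study).
% \bar{x}_A, \bar{x}_B : sample means of the two datasets
% u_A, u_B             : their standard uncertainties
% k                    : coverage factor (e.g., k = 2)
\[
  \left| \bar{x}_A - \bar{x}_B \right| \le k \sqrt{u_A^{2} + u_B^{2}}
  \quad \Longrightarrow \quad \text{the datasets are judged to agree within uncertainties.}
\]
```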