Global Education Review (Oct 2018)

Different analyses, different validity conclusions? Evidence from the EGMA Spatial Reasoning Subtask

  • Lindsey Perry

Journal volume & issue
Vol. 5, no. 3
pp. 125–142

Abstract


As the global development community shifts its focus from improving access to education to improving learning and instruction, the need for instruments that accurately measure student achievement in mathematics and meet technical standards is increasing. This paper explores the importance of collecting high-quality validity evidence that aligns with an instrument’s intended uses and interpretations by discussing a new subtask developed for the Early Grade Mathematics Assessment (EGMA). The EGMA Spatial Reasoning subtask was developed by RTI International with funding from the United States Agency for International Development (USAID). To collect validity evidence for the assumption that the EGMA Spatial Reasoning subtask could be used to determine overall student proficiency in spatial reasoning, the items developed for the subtask were pilot tested with 1,426 students in Jordan. Pilot test data were initially analyzed using Item Response Theory; however, because Item Response Theory assumptions were not met, supplemental analyses were conducted using Classical Test Theory. The two methods yielded different findings, which affects the interpretations that can be made from this instrument. This paper illustrates the importance of choosing analytic techniques that align with an instrument’s intended use in order to make valid interpretations from the data to inform policy and practice.
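To make the contrast between the two analytic frameworks concrete, the sketch below simulates dichotomous item responses for a pilot sample of 1,426 students from a Rasch (one-parameter IRT) model and then scores the items with the Classical Test Theory statistics the paper falls back on: item difficulty (proportion correct) and item-rest point-biserial discrimination. All numbers here are illustrative assumptions, not the actual EGMA Spatial Reasoning data or results.

```python
import math
import random

random.seed(0)

# Assumed setup (hypothetical, not the EGMA pilot data): 1,426 students,
# 8 dichotomous items with spread-out Rasch difficulties.
n_students = 1426
difficulties = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]

def rasch_prob(theta, b):
    """IRT (Rasch/1PL) probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Simulate responses: each student draws a latent ability theta ~ N(0, 1).
responses = []
for _ in range(n_students):
    theta = random.gauss(0.0, 1.0)
    responses.append([1 if random.random() < rasch_prob(theta, b) else 0
                      for b in difficulties])

# CTT item difficulty: the proportion of students answering correctly
# (higher p = easier item, the reverse of the IRT b parameter).
def p_value(j):
    return sum(row[j] for row in responses) / n_students

# CTT discrimination: point-biserial correlation between an item and the
# rest-score (total score excluding that item).
def point_biserial(j):
    xs = [row[j] for row in responses]
    rest = [sum(row) - row[j] for row in responses]
    n = len(xs)
    mx, mr = sum(xs) / n, sum(rest) / n
    cov = sum((x - mx) * (r - mr) for x, r in zip(xs, rest)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sr = (sum((r - mr) ** 2 for r in rest) / n) ** 0.5
    return cov / (sx * sr)

for j, b in enumerate(difficulties):
    print(f"item {j}: IRT b = {b:+.1f}, CTT p = {p_value(j):.2f}, "
          f"r_pb = {point_biserial(j):.2f}")
```

The key point the abstract makes shows up even in this toy setup: the IRT parameters are only interpretable if the model's assumptions (unidimensionality, local independence, model fit) hold, whereas the CTT statistics are always computable but are sample-dependent, so the two analyses can support different conclusions about the same items.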

Keywords