Large-scale Assessments in Education (Jul 2024)

Gender differences in literacy in PIAAC: do assessment features matter?

  • Ai Miyamoto,
  • Britta Gauly,
  • Anouk Zabal

DOI
https://doi.org/10.1186/s40536-024-00208-9
Journal volume & issue
Vol. 12, no. 1
pp. 1 – 18

Abstract


Background Previous research based on large-scale studies consistently suggests that, on average, male students tend to have lower literacy than their female peers during secondary schooling. However, this gender gap in literacy seems to “disappear” during adulthood. To date, only a few studies have investigated the role of assessment features in gender differences in adult literacy performance. This study aims to understand the relationship between assessment features and gender differences in literacy skills.

Methods Using the German 2012 PIAAC data (N = 4,512), we conducted item-level analyses using linear probability models to examine gender differences in the probability of correctly solving a literacy item across six assessment features: (1) text format, (2) text topic, (3) text length, (4) cognitive strategy, (5) numerical content of the text/question, and (6) gender typicality of content.

Results We found that men had a 13.4% higher probability than women of correctly solving items with a noncontinuous text format. Men also had a 9.4% higher probability of correctly solving short-text items and a 4.6% higher probability of correctly solving items with medium/high numerical content in the question. Gender differences in literacy performance with respect to text topic, cognitive strategy, and gender typicality of content were small to negligible.

Conclusions Our findings highlight the role of text format, text length, and numerical content in gender differences in literacy skills, suggesting that further refining these assessment practices can enhance the fairness and accuracy of literacy assessments. Specifically, we advocate for ongoing research aimed at understanding and minimizing the potential bias introduced by these assessment features. Such efforts are crucial not only for developing instruments that accurately measure literacy skills, but also for yielding insights with significant implications for educational researchers and practitioners dedicated to creating more equitable assessment environments.