Social Sciences (Feb 2019)

Measurement Invariance of a Direct Behavior Rating Multi-Item Scale across Occasions

  • Markus Gebhardt,
  • Jeffrey M. DeVries,
  • Jana Jungjohann,
  • Gino Casale,
  • Andreas Gegenfurtner,
  • Jörg-Tobias Kuhn

DOI
https://doi.org/10.3390/socsci8020046
Journal volume & issue
Vol. 8, no. 2
p. 46

Abstract


Direct Behavior Rating (DBR), as a behavioral progress monitoring tool, can be designed as a longitudinal assessment with only short intervals between measurement points. The reliability of these instruments has mostly been evaluated in observational studies with small samples based on generalizability theory. However, for standardized use in the pedagogical field, a larger and broader sample is required in order to assess measurement invariance between different participant groups and over time. Therefore, we constructed a DBR, the Questionnaire for Monitoring Behavior in Schools (QMBS), with multiple items to measure the occurrence of specific externalizing and internalizing student classroom behaviors on a Likert scale (1 = never to 7 = always). In a pilot study, two trained raters observed 16 primary education students and rated the students' behavior on all items with satisfactory reliability. In the main study, 108 regular primary school students, 97 regular secondary school students, and 14 students in a clinical setting were rated daily over one week (five measurement points). Item response theory (IRT) analyses confirmed the technical adequacy of the instrument, and latent growth models demonstrated the instrument's stability over time. Further development of the instrument and of study designs to implement DBRs is discussed.
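
As an illustration of the modeling approach mentioned above, a minimal sketch of a linear latent growth model over the five daily measurement points, assuming equally spaced occasions (the abstract does not report the authors' exact specification), can be written as

y_{it} = \eta_{0i} + t \, \eta_{1i} + \varepsilon_{it}, \quad t = 0, 1, \dots, 4,
\eta_{0i} = \alpha_0 + \zeta_{0i}, \quad \eta_{1i} = \alpha_1 + \zeta_{1i},

where y_{it} is the rating of student i at occasion t, \eta_{0i} is the latent intercept (initial behavior level), and \eta_{1i} is the latent slope across the week; stability over time corresponds to a mean slope \alpha_1 close to zero and limited slope variance.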

Keywords