SAGE Open (Aug 2020)

Latent Structure of Executive Functioning/Learning Tasks in the CogState Computerized Battery

  • Elisabeth Nordenswan,
  • Eeva-Leena Kataja,
  • Kirby Deater-Deckard,
  • Riikka Korja,
  • Mira Karrasch,
  • Matti Laine,
  • Linnea Karlsson,
  • Hasse Karlsson

DOI
https://doi.org/10.1177/2158244020948846
Journal volume & issue
Vol. 10

Abstract


This study tested whether executive functioning (EF)/learning tasks from the CogState computerized test battery show a unitary latent structure. This information is important for constructing composite measures from these tasks for applied research purposes. Based on earlier factor analytic research, we identified five CogState tasks that have been labeled as EF/learning tasks and examined their intercorrelations in a new sample of Finnish birth cohort mothers (N = 233). Using confirmatory factor analyses, we compared two single-factor EF/learning models. The first model included the recommended summative scores for each task. The second model replaced the summative scores with first test round results for the three tasks providing these data, as initial task performance is expected to load more heavily on EF. A single-factor solution provided a good fit for the present five EF/learning tasks. The second model, hypothesized to tap more strongly into EF, had slightly better fit indices, χ²(5) = 1.37, p = .93, standardized root mean square residual (SRMR) = .02, root mean square error of approximation (RMSEA) = .00, 90% CI = [.00, .03], comparative fit index (CFI) = 1.00, and more even factor loadings (.30–.56) than the first model, hypothesized to tap more strongly into learning, χ²(5) = 4.56, p = .47, SRMR = .03, RMSEA = .00, 90% CI = [.00, .09], CFI = 1.00, factor loadings .20–.74. We conclude that the present CogState sum scores can be used for studying EF/learning in healthy adult samples, but we call for further research to validate these sum scores against other EF tests.
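The comparison described in the abstract rests on a standard single-factor confirmatory factor analysis. As a rough illustration only, and not the authors' actual analysis pipeline or software, the sketch below shows how a one-factor model over five observed task scores could be specified and fit in Python with the semopy package; the variable names (task1–task5), the illustrative loadings, and the simulated data are hypothetical and stand in for the CogState scores, which are not reproduced here.

```python
# Hypothetical sketch of a single-factor CFA over five task scores.
# Assumes the semopy package (pip install semopy); variable names,
# loadings, and simulated data are illustrative, not the study's data.
import numpy as np
import pandas as pd
import semopy

# Simulate 233 "participants" whose five task scores share one latent factor.
rng = np.random.default_rng(0)
n = 233
latent = rng.normal(size=n)
loadings = [0.56, 0.47, 0.41, 0.35, 0.30]  # illustrative loading pattern
tasks = {
    f"task{i + 1}": lam * latent + rng.normal(scale=np.sqrt(1 - lam**2), size=n)
    for i, lam in enumerate(loadings)
}
data = pd.DataFrame(tasks)

# Single-factor model: one EF/learning factor indicated by all five tasks.
desc = "EF =~ task1 + task2 + task3 + task4 + task5"
model = semopy.Model(desc)
model.fit(data)

print(model.inspect())             # factor loadings and error variances
print(semopy.calc_stats(model).T)  # fit indices such as chi-square, CFI, RMSEA
```

In this kind of sketch, swapping the summative scores for first-round scores (as in the paper's second model) would only change the observed variables entering the data frame; the model syntax and the fit indices compared (chi-square, SRMR, RMSEA, CFI) stay the same.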