Frontiers in Psychology (Apr 2014)

How to become twice more precise in detecting neuropsychological impairments

  • Davide Crepaldi,
  • Alessandra Casarotti,
  • Barbara Zarino,
  • Costanza Papagno

DOI
https://doi.org/10.3389/conf.fpsyg.2014.64.00065
Journal volume & issue
Vol. 5

Abstract


Introduction

Although it was a giant leap forward when it was introduced, the classic approach to the norming of neuropsychological tests (Capitani, 1987) has two main limitations: (i) it does not consider possible interactions between covariates (e.g., age and education); (ii) because it works on by–subject percentages of correct responses, it cannot take into account item covariates (e.g., frequency, length, imageability) that are known to affect performance substantially. Here we show how to overcome these limitations, and how doing so improves diagnosis.

Methods

As a test case, we used the action–naming task devised by Crepaldi et al. (2006). Our norming sample comprised 290 healthy Italian speakers (148 F, 142 M), ranging from 18 to 98 years in age (M=54.1) and from 3 to 23 years in education (M=12.3). The test was standardized: (i) following the classic Capitani approach, based on regressing by–subject mean accuracy on gender, age and education; (ii) adding interactions between gender, age and education to the classic approach; (iii) fitting raw correct/incorrect scores with mixed–effects models, which can also control for item variables (e.g., frequency, imageability) and for any additional item– or subject–specific random variability (mixed–effects norming; Jaeger, 2008).

Results

We first contrasted the three approaches in terms of the amount of variance they explain in the computation of expected scores, that is, their ability to “wash out” unwanted effects from expected scores. The classic Capitani norming explained 34.7% of the total variance; interaction norming went up to 43.7%; and mixed–effects norming reached 73%. A comparison between expected scores according to Capitani and mixed–effects norming in 2,600 combinations of age (20–85) and education (3–22) values revealed an overall correlation of .81.
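The gain from adding interaction terms to the classic regression norming can be illustrated with a small self-contained sketch. The sample, coefficients and predictors below are synthetic placeholders, not the published norms; the mixed-effects step (which would also need item-level random effects, typically via a dedicated statistics library) is omitted here.

```python
import random

def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination with partial pivoting."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * p
    for r in range(p - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, p))) / A[r][r]
    return coef

def r_squared(X, y, coef):
    """Proportion of variance in y explained by the fitted model."""
    pred = [sum(c * x for c, x in zip(coef, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Synthetic norming sample of 290 subjects whose accuracy genuinely
# depends on an age-by-education interaction (illustrative values only).
random.seed(0)
subjects = []
for _ in range(290):
    age = random.uniform(18, 98)
    edu = random.uniform(3, 23)
    acc = (0.95 - 0.002 * age + 0.005 * edu
           + 0.0001 * age * edu + random.gauss(0, 0.03))
    subjects.append((age, edu, acc))

y = [s[2] for s in subjects]
X_main = [[1.0, a, e] for a, e, _ in subjects]        # classic: main effects
X_int = [[1.0, a, e, a * e] for a, e, _ in subjects]  # plus interaction

r2_main = r_squared(X_main, y, fit_ols(X_main, y))
r2_int = r_squared(X_int, y, fit_ols(X_int, y))
print(f"main effects R^2:     {r2_main:.3f}")
print(f"with interaction R^2: {r2_int:.3f}")
```

Because the main-effects model is nested in the interaction model, the latter's R² can never be lower; when the data truly contain an interaction, as simulated here, the gap mirrors the 34.7% versus 43.7% contrast reported above.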
Differences were generally larger with young age and low education, and with old age and high education, and typically showed higher expected scores under mixed–effects than under Capitani norming (see Figure 1). Equivalent Scores (ES) were then computed for a simulated sample of around 80,000 patients varying in age (20–85 years), education (3–22 years), gender (M–F) and raw score (20–50 correct responses). The two approaches disagreed 28% of the time. The difference (in either direction) was 1.28 on average, and ranged from 1 to 4. The mixed–effects ES was lower than the Capitani ES in 61% of cases. Among the 45,070 simulated patients who would be classified as impaired at naming verbs (ES=0) according to Capitani norming, 5% would not be classified as such (ES>0) according to mixed–effects norming. These figures were closely replicated in a sample of 69 unselected aphasic patients.

Discussion

The data indicate that using a better statistical model to calculate expected scores is highly beneficial in terms of explained variance: we can gauge the true effects of subject variables (e.g., age) and item variables (e.g., frequency) much more precisely, and thus obtain higher–quality corrected scores. Critically, this translates substantially into how (simulated and actual) patients are classified.
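The abstract does not spell out how Equivalent Scores are derived; in the Italian norming tradition they form a five-level ordinal scale (0 = impaired, 4 = unimpaired) obtained by banding adjusted scores against normative tolerance limits. A minimal sketch, with purely hypothetical cutoff values:

```python
def equivalent_score(adjusted, cutoffs):
    """Map an age/education-adjusted score to an Equivalent Score (0-4).
    `cutoffs` are four ascending band boundaries; an adjusted score below
    the first (outer tolerance limit) yields ES=0, i.e. impaired.
    The values used here are illustrative placeholders, not the
    action-naming test's published limits."""
    for es, cut in enumerate(cutoffs):
        if adjusted < cut:
            return es
    return 4

# Hypothetical band boundaries for a 50-item naming test
cuts = [30.2, 33.5, 36.1, 39.4]
print(equivalent_score(28.0, cuts))  # below the outer limit -> 0
print(equivalent_score(41.0, cuts))  # above all boundaries   -> 4
```

Under this scheme, the 28% disagreement reported above corresponds to the two norming methods producing adjusted scores that fall on different sides of one or more band boundaries for the same patient.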
