Frontiers in Psychology (Apr 2014)
A new approach to assessing intra-subject variability in single-subject designs
Abstract
Research Aim: One of the methodological challenges of single-subject designs is accounting for intra-individual variability in performance, which is commonly assessed by administering the same testing materials across multiple sessions (McReynolds & Thompson, 1986). This approach may be less feasible for some individuals, such as bilingual speakers with aphasia, who would need to be tested with the same materials on several occasions both within and across languages. Repeated exposure to the same testing materials can increase practice effects and further reduce the validity of the testing. In the present study, we explored an alternative approach to measuring stability in performance by comparing the use of identical vs. different (but comparable) testing materials.

Method: Participants were five monolinguals with non-fluent aphasia following a single left CVA. Participants performed an action-naming task and two narrative tasks: a picture sequence and a personal narrative. There were two testing times, several weeks apart, using identical materials. Each testing time included three consecutive sessions, using different (but comparable) materials. Action-naming performance was assessed by the number of correct verbs produced. The verbal output in the narrative tasks was scored for amount (number of utterances), grammaticality (percentage of grammatical sentences), and verb diversity (number of different verbs). Pearson correlation coefficients (r) were computed to establish intra-subject variability across testing times and across sessions. The magnitude of the correlations was evaluated using published guidelines (Strauss, Sherman, & Spreen, 2006).

Results: For action naming, correlations ranged from high to very high (.87 to .98) across testing times and from adequate to very high (.74 to .96) across sessions. For the narrative tasks, correlations for the number of utterances ranged from adequate to very high (.76 to .97) across testing times and from low to very high (.47 to .99) across sessions. Correlations for the percentage of grammatical sentences ranged from low to very high across testing times (.27 to .93) and across sessions (.02 to .95). The number of different verbs showed adequate to very high correlations across testing times (.75 to .99) and marginal to very high correlations across sessions (.69 to .99).

Conclusions: The findings indicate that repeated testing with identical and with comparable materials results in correlations of similar magnitude, suggesting that the two types of materials yield similar information about intra-individual variability in performance. Given these findings, it seems methodologically sound to use non-identical stimuli to establish stability in performance, thereby minimizing practice effects in testing procedures for patients with aphasia.

References
McReynolds, L. V., & Thompson, C. K. (1986). Flexibility of single-subject experimental designs. Part I: Review of the basics of single-subject designs. Journal of Speech and Hearing Disorders, 51(3), 194-203. doi:10.1044/jshd.5103.194
Strauss, E., Sherman, E. M. S., & Spreen, O. (2006). A compendium of neuropsychological tests: Administration, norms, and commentary. New York: Oxford University Press.
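As a rough illustration of the stability analysis described above, the sketch below computes a Pearson r between two administrations of a task and labels its magnitude. This is not the authors' analysis code: the item-level scores are hypothetical, and the magnitude cut-offs are conventional reliability bands chosen to be consistent with the low/marginal/adequate/high/very high labels used in the Results.

import numpy as np
from scipy.stats import pearsonr

def magnitude_label(r):
    """Map a correlation to a descriptive band (cut-offs assumed here,
    consistent with the labels used in the abstract)."""
    if r >= 0.90:
        return "very high"
    if r >= 0.80:
        return "high"
    if r >= 0.70:
        return "adequate"
    if r >= 0.60:
        return "marginal"
    return "low"

# Hypothetical per-item accuracy (1 = correct, 0 = incorrect) for one
# participant across two administrations of comparable action-naming items.
session_a = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
session_b = np.array([1, 0, 0, 1, 1, 1, 1, 0, 1, 1])

# Pearson correlation between the two administrations quantifies
# intra-subject stability of performance.
r, p = pearsonr(session_a, session_b)
print(f"r = {r:.2f}, p = {p:.3f} -> {magnitude_label(r)}")

The same computation would apply to the narrative measures (e.g., number of utterances per session across participants or probes), with one r per measure per comparison, yielding the ranges reported above.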
Keywords