Frontiers in Psychology (Jan 2019)
Multisource Assessment for Development Purposes: Revisiting the Methodology of Data Analysis
Abstract
Multisource assessment (MSA) rests on the premise that raters' assessments constitute valid inferences about an individual’s behavior. When MSA is used for performance management purposes, convergence of views among raters is important, and testing factor invariance across raters is therefore critical. When MSA is used for development purposes, however, raters usually come from a greater number of contexts, which requires a different data analysis approach. We revisit the MSA data analysis methodology for developmental applications, with the aim of improving its effectiveness. First, we argue that having raters from different contexts is an integral element of the assessment, with the trait–context dyad being the actual latent variable. This leads to the specification of an aggregate (instead of the usual latent) multidimensional factor model. Second, since data analysis usually aggregates the scores of each rater group into a single mean that is then compared with the self-rating score, we propose that the test for factor invariance must also include scalar invariance, a prerequisite for mean comparison. To illustrate this methodology, we conducted a 360° survey on a sample of over 1,100 MBA students enrolled in a leadership development course. Finally, through this study we show how the survey can be customized to each rater group to make the MSA process more effective.
Keywords