Journal of MultiDisciplinary Evaluation (Oct 2007)
The Logic of Summative Confidence
Abstract
The constraints of conducting evaluations in real-world settings often necessitate the implementation of less-than-ideal designs. Unfortunately, the standard method for estimating the precision of a result (i.e., confidence intervals [CIs]) cannot be applied to evaluative conclusions that are derived from, for example, multiple indicators, measures, and data sources. Moreover, CIs ignore the impact of sampling and measurement error. Considering that the vast majority of evaluative conclusions are based on numerous criteria of merit that often are poorly measured, a significant gap exists with respect to how one can estimate the CI of an evaluative conclusion. The purpose of this paper is (1) to heighten readers' awareness of the consequences of employing a weak evaluation design and (2) to introduce the need for the development of a methodology that can be used to characterize the precision of an evaluative conclusion.
Keywords