BMC Public Health (Oct 2006)
Approaches to the evaluation of outbreak detection methods
Abstract
Background
An increasing number of methods are being developed for the early detection of infectious disease outbreaks, which may be naturally occurring or the result of bioterrorism; however, no standardised framework exists for examining the usefulness of the various outbreak detection methods. To promote comparability between studies, it is essential that standardised methods are developed for the evaluation of outbreak detection methods.

Methods
This analysis aims to review the approaches used to evaluate outbreak detection methods and to provide a conceptual framework upon which recommendations for standardised evaluation methods can be based. We reviewed the recently published literature for reports that evaluated methods for the detection of infectious disease outbreaks in public health surveillance data. Evaluation methods identified in the recent literature were categorised according to the presence of common features to provide a conceptual basis within which to understand current approaches to evaluation.

Results
There was considerable variation in the approaches used to evaluate methods for the detection of outbreaks in public health surveillance data, and there appeared to be no single approach of choice. Four main approaches were used to evaluate performance; these were labelled the Descriptive, Derived, Epidemiological and Simulation approaches. Based on the approaches identified, we propose a basic framework for evaluation and recommend the use of multiple approaches to enable a comprehensive and contextualised description of outbreak detection performance.

Conclusion
The varied nature of performance evaluation demonstrated in this review supports the need for further development of evaluation methods to improve comparability between studies. Our findings indicate that no single approach can fulfil all evaluation requirements. We propose that the cornerstone approaches to evaluation identified here provide key contributions to the internal and external validity and comparability of study findings, and suggest that these be incorporated into future recommendations for performance assessment.
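To illustrate what a Simulation-style evaluation of an outbreak detection method might involve, the sketch below injects synthetic outbreak signals into simulated baseline counts and scores a simple threshold detector on sensitivity, timeliness, and false alarms. This is a minimal, hypothetical example; the baseline model, the detector, and all parameters are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch of a Simulation-style evaluation (illustrative assumptions only,
# not the methods described in the paper): inject synthetic outbreaks into
# simulated baseline counts and score a simple threshold detector.
import random
import statistics


def simulate_baseline(days=365, mean=20):
    """Simulated daily baseline counts (normal approximation to a Poisson)."""
    return [max(0, round(random.gauss(mean, mean ** 0.5))) for _ in range(days)]


def inject_outbreak(series, start, duration=7, extra=15):
    """Add a flat outbreak signal of `extra` cases/day for `duration` days."""
    out = series[:]
    for d in range(start, min(start + duration, len(out))):
        out[d] += extra
    return out


def threshold_detector(series, window=28, z=3.0):
    """Flag day t if its count exceeds mean + z*sd of the preceding window."""
    alarms = []
    for t in range(window, len(series)):
        hist = series[t - window:t]
        mu, sd = statistics.mean(hist), statistics.stdev(hist)
        if series[t] > mu + z * sd:
            alarms.append(t)
    return alarms


def evaluate(n_runs=200, outbreak_start=200, duration=7):
    """Estimate sensitivity, detection delay, and pre-outbreak false alarms."""
    detected, delays, false_alarms = 0, [], 0
    for _ in range(n_runs):
        series = inject_outbreak(simulate_baseline(), outbreak_start, duration)
        alarms = threshold_detector(series)
        hits = [a for a in alarms if outbreak_start <= a < outbreak_start + duration]
        false_alarms += len([a for a in alarms if a < outbreak_start])
        if hits:
            detected += 1
            delays.append(hits[0] - outbreak_start)  # days from onset to first alarm
    return {
        "sensitivity": detected / n_runs,
        "mean_detection_delay_days": statistics.mean(delays) if delays else None,
        "false_alarms_per_run": false_alarms / n_runs,
    }


if __name__ == "__main__":
    print(evaluate())
```

Running the script prints summary metrics averaged over the simulated runs; in practice, a simulation-based evaluation would vary outbreak size, shape, and timing to characterise performance across scenarios.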