PLoS ONE (Jan 2017)

A comparison of multiple testing adjustment methods with block-correlation positively-dependent tests.

  • John R Stevens,
  • Abdullah Al Masud,
  • Anvar Suyundikov

DOI: https://doi.org/10.1371/journal.pone.0176124
Journal volume & issue: Vol. 12, No. 4, p. e0176124

Abstract

In high-dimensional data analysis (such as gene expression, spatial epidemiology, or brain imaging studies), we often test thousands or more hypotheses simultaneously. As the number of tests increases, the chance of observing some statistically significant results is very high even when all null hypotheses are true, so we could reach incorrect conclusions regarding the hypotheses. Researchers frequently use multiplicity adjustment methods to control Type I error rates, primarily the family-wise error rate (FWER) or the false discovery rate (FDR), while still seeking high statistical power. In practice, such studies may have dependent test statistics (or p-values), as tests can be dependent on each other. However, some commonly used multiplicity adjustment methods assume independent tests. We perform a simulation study comparing several of the most common adjustment methods used in multiple hypothesis testing, under varying degrees of block-correlation positive dependence among tests.
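
To make the setting concrete, the following is a minimal Python sketch of the kind of simulation the abstract describes: test statistics are generated with positive correlation inside blocks (blocks independent of each other), converted to p-values, and then several common adjustment methods are compared. All settings (number of tests, block size, within-block correlation, effect size, proportion of non-null tests) and the particular methods shown are illustrative assumptions, not the design or results reported in the paper.

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2017)

# Hypothetical simulation settings (not taken from the paper).
n_tests = 1000          # total number of hypotheses
block_size = 20         # tests per correlated block
rho = 0.5               # within-block positive correlation
prop_alt = 0.10         # proportion of truly non-null tests
effect = 3.0            # mean shift for non-null test statistics
n_blocks = n_tests // block_size

# Block compound-symmetry correlation: tests within a block share
# correlation rho; different blocks are independent.
block_corr = np.full((block_size, block_size), rho)
np.fill_diagonal(block_corr, 1.0)
chol = np.linalg.cholesky(block_corr)

# Simulate one data set of block-correlated z-statistics.
z = np.concatenate([chol @ rng.standard_normal(block_size)
                    for _ in range(n_blocks)])
is_alt = rng.random(n_tests) < prop_alt
z[is_alt] += effect

# Two-sided p-values.
pvals = 2 * stats.norm.sf(np.abs(z))

# Apply several common adjustment methods at alpha = 0.05 and compare
# the number of rejections, the false-discovery proportion, and power.
for method in ["bonferroni", "holm", "fdr_bh", "fdr_by"]:
    reject, _, _, _ = multipletests(pvals, alpha=0.05, method=method)
    fdp = np.mean(~is_alt[reject]) if reject.any() else 0.0
    power = np.mean(reject[is_alt]) if is_alt.any() else 0.0
    print(f"{method:>10}: rejections={reject.sum():4d}  "
          f"FDP={fdp:.3f}  power={power:.3f}")

In a full study this single replicate would be repeated many times, with FWER, FDR, and power averaged over replicates and across a grid of correlation strengths, which is the comparison the abstract refers to.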