F1000Research (Sep 2016)

Whose sample is it anyway? Widespread misannotation of samples in transcriptomics studies [version 2; referees: 2 approved]

  • Lilah Toker,
  • Min Feng,
  • Paul Pavlidis

DOI
https://doi.org/10.12688/f1000research.9471.2
Journal volume & issue
Vol. 5

Abstract

Concern about the reproducibility and reliability of biomedical research has been rising. An understudied issue is the prevalence of sample mislabeling, one impact of which would be invalid comparisons. We studied this issue in a corpus of human transcriptomics studies by comparing the provided annotations of sex to the expression levels of sex-specific genes. We identified apparently mislabeled samples in 46% of the datasets studied, yielding a lower-bound estimate, at 99% confidence, of 33% for all studies. In a separate analysis of a set of datasets concerning a single cohort of subjects, 2/4 had mislabeled samples, indicating laboratory mix-ups rather than data recording errors. While the number of mixed-up samples per study was generally small, our method can identify only a subset of potential mix-ups, so our estimate of the breadth of the problem is conservative. Our findings emphasize the need for more stringent sample tracking, and that re-users of published data must be alert to the possibility of annotation and labeling errors.
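The kind of check described in the abstract (comparing annotated sex against expression of sex-specific genes) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the marker genes (XIST and the Y-linked genes RPS4Y1 and KDM5D), the margin threshold, and the function names are assumptions chosen for clarity.

```python
import pandas as pd

# Illustrative sex-specific marker genes (assumption; the paper's exact gene
# set and decision rule are not reproduced here).
FEMALE_MARKER = "XIST"                # highly expressed in female samples
MALE_MARKERS = ["RPS4Y1", "KDM5D"]    # Y-linked, expressed in male samples


def infer_sex(expr: pd.DataFrame, margin: float = 1.0) -> pd.Series:
    """Infer sample sex from a log-expression matrix (genes x samples).

    A sample is called 'female' when XIST clearly exceeds the mean of the
    Y-linked markers, 'male' in the opposite case, and 'unknown' when the
    signal is ambiguous. `margin` is an arbitrary log-expression cutoff.
    """
    xist = expr.loc[FEMALE_MARKER]
    y_mean = expr.loc[MALE_MARKERS].mean(axis=0)
    diff = xist - y_mean

    calls = pd.Series("unknown", index=expr.columns)
    calls[diff > margin] = "female"
    calls[diff < -margin] = "male"
    return calls


def flag_mismatches(expr: pd.DataFrame, annotated_sex: pd.Series) -> pd.Series:
    """Return annotated sex for samples whose annotation disagrees with the
    sex inferred from expression (ambiguous samples are ignored)."""
    inferred = infer_sex(expr)
    confident = inferred != "unknown"
    disagrees = inferred[confident] != annotated_sex[confident]
    return annotated_sex[confident][disagrees]


# Example usage (assuming `log_expression` is genes x samples and
# `sample_sheet["sex"]` holds the provided annotations):
# mismatched = flag_mismatches(log_expression, sample_sheet["sex"])
```

Note that such a check is one-sided: it can flag samples whose annotated sex contradicts the expression data, but it cannot detect mix-ups between samples of the same sex, which is why the abstract describes the resulting estimate as conservative.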

Keywords