Journal of Pathology Informatics (Jan 2021)

Effects of image quantity and image source variation on machine learning histology differential diagnosis models

  • Elham Vali-Betts,
  • Kevin J Krause,
  • Alanna Dubrovsky,
  • Kristin Olson,
  • John Paul Graff,
  • Anupam Mitra,
  • Ananya Datta-Mitra,
  • Kenneth Beck,
  • Aristotelis Tsirigos,
  • Cynthia Loomis,
  • Antonio Galvao Neto,
  • Esther Adler,
  • Hooman H Rashidi

DOI
https://doi.org/10.4103/jpi.jpi_69_20
Journal volume & issue
Vol. 12, no. 1
pp. 5 – 5

Abstract

Aims: Histology, the microscopic study of normal tissues, is a crucial element of most medical curricula. Learning tools focused on histology are valuable to learners seeking diagnostic competency in this arena. Recent developments in machine learning (ML) suggest that certain ML tools may benefit this histology learning platform. Here, we explore how one such tool, based on a convolutional neural network, can be used to build a generalizable multi-classification model capable of classifying microscopic images of human tissue samples, with the ultimate goal of providing a differential diagnosis (a list of look-alikes) for each entity.

Methods: We obtained three institutional training datasets and one generalizability test dataset, each containing images of histologic tissues in 38 categories. Models were trained on data from single institutions, low-quantity combinations of multiple institutions, and high-quantity combinations of multiple institutions. Models were tested against withheld validation data, external institutional data, and generalizability test images obtained from Google image search. Performance was measured with macro- and micro-averaged accuracy, sensitivity, specificity, and F1-score.

Results: We show that such a model's generalizability depends on both the variety of training data sources and the total number of training images. Models trained on 760 images from a single institution performed well on withheld internal data but poorly on external data (lower generalizability). Increasing data source diversity improved generalizability even when data quantity decreased: models trained on 684 images drawn from three sources improved generalization accuracy by between 4.05% and 18.59%. Maintaining this diversity while increasing the quantity of training images to 2280 further improved generalization accuracy by between 16.51% and 32.79%.
Conclusions: This pilot study highlights the significance of data diversity within such studies. As expected, optimal models are those that incorporate both diversity and quantity into their platforms.
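The macro- and micro-averaged metrics reported in the abstract can be illustrated with a small sketch (not from the article; the labels and predictions below are hypothetical, and the study itself used 38 tissue categories): macro averaging computes the metric per class and takes an unweighted mean, while micro averaging pools all predictions before computing the metric.

```python
def per_class_stats(y_true, y_pred, labels):
    """Count true positives, false positives, and false negatives per class."""
    stats = {c: {"tp": 0, "fp": 0, "fn": 0} for c in labels}
    for t, p in zip(y_true, y_pred):
        if t == p:
            stats[t]["tp"] += 1
        else:
            stats[p]["fp"] += 1  # predicted class gets a false positive
            stats[t]["fn"] += 1  # true class gets a false negative
    return stats

def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores (macro averaging)."""
    stats = per_class_stats(y_true, y_pred, labels)
    f1s = []
    for c in labels:
        s = stats[c]
        prec = s["tp"] / (s["tp"] + s["fp"]) if s["tp"] + s["fp"] else 0.0
        rec = s["tp"] / (s["tp"] + s["fn"]) if s["tp"] + s["fn"] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def micro_accuracy(y_true, y_pred):
    """Fraction of correct predictions pooled over all classes (micro averaging)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical 3-class example
y_true = ["liver", "lung", "lung", "skin", "skin", "skin"]
y_pred = ["liver", "lung", "skin", "skin", "skin", "lung"]
labels = ["liver", "lung", "skin"]
print(micro_accuracy(y_true, y_pred))                 # 4/6 ≈ 0.667
print(round(macro_f1(y_true, y_pred, labels), 3))     # 0.722
```

Because macro averaging weights every class equally, it is sensitive to poor performance on rare tissue categories, which matters when the 38 classes are imbalanced.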

Keywords