IEEE Access (Jan 2022)

Breast Cancer Diagnosis in Two-View Mammography Using End-to-End Trained EfficientNet-Based Convolutional Network

  • Daniel G. P. Petrini,
  • Carlos Shimizu,
  • Rosimeire A. Roela,
  • Gabriel Vansuita Valente,
  • Maria Aparecida Azevedo Koike Folgueira,
  • Hae Yong Kim

DOI
https://doi.org/10.1109/ACCESS.2022.3193250
Journal volume & issue
Vol. 10
pp. 77723–77731

Abstract


Some recent studies have described deep convolutional neural networks that diagnose breast cancer in mammograms with performance similar or even superior to that of human experts. One of the best techniques performs two transfer learnings: the first uses a model trained on natural images to create a “patch classifier” that categorizes small subimages; the second uses the patch classifier to scan the whole mammogram and create the “single-view whole-image classifier”. We propose a third transfer learning to obtain a “two-view classifier” that uses the two mammographic views: bilateral craniocaudal and mediolateral oblique. We use EfficientNet as the basis of our model and train the entire system end-to-end on the CBIS-DDSM dataset. To ensure statistical robustness, we test our system twice, using (a) 5-fold cross-validation and (b) the original training/test division of the dataset. Our technique reached an AUC of 0.9344 using 5-fold cross-validation (accuracy, sensitivity, and specificity are all 85.13% at the equal-error-rate point of the ROC curve). Using the original dataset division, our technique achieved an AUC of 0.8483, to our knowledge the highest reported AUC for this problem, although subtle differences in the testing conditions of each work do not allow for an accurate comparison. The inference code and model are available at https://github.com/dpetrini/two-views-classifier

Keywords