IEEE Access (Jan 2020)

Cross-Organ, Cross-Modality Transfer Learning: Feasibility Study for Segmentation and Classification

  • Juhun Lee,
  • Robert M. Nishikawa

DOI: https://doi.org/10.1109/ACCESS.2020.3038909
Journal volume & issue: Vol. 8, pp. 210194–210205

Abstract


We conducted two analyses comparing the transferability of a traditionally transfer-learned CNN (TL) to that of a CNN fine-tuned first with an unrelated set of medical images (mammograms in this study) and then fine-tuned a second time using TL, which we call the cross-organ, cross-modality transfer-learned (XTL) network, on 1) multiple sclerosis (MS) segmentation of brain magnetic resonance (MR) images and 2) tumor malignancy classification of multi-parametric prostate MR images. We used 2133 screening mammograms and two public challenge datasets (longitudinal MS lesion segmentation and ProstateX) as the intermediate and target datasets for XTL, respectively. We used two CNN architectures as basis networks for each analysis and fine-tuned them to match the target image types (volumetric) and tasks (segmentation and classification). We evaluated the XTL networks against the traditional TL networks using the Dice coefficient and AUC as figures of merit for the two analyses, respectively. For the segmentation task, XTL networks outperformed TL networks in terms of the Dice coefficient (0.72 vs. 0.70–0.71; p < 0.0001 for the differences). For the classification task, XTL networks (AUCs = 0.77–0.80) outperformed TL networks (AUCs = 0.73–0.75). The difference in AUCs (AUC_diff = 0.045–0.047) was statistically significant (p < 0.03). We showed that XTL using mammograms improves network performance compared to traditional TL, despite the differences in image characteristics (x-ray vs. MRI, 2D vs. 3D) and imaging tasks (classification vs. segmentation for one of the tasks).
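The core of the XTL recipe described above is a two-stage fine-tuning schedule: an ImageNet-pretrained backbone is first fine-tuned on the intermediate mammogram set and then fine-tuned again on the target task. The following is a minimal PyTorch sketch of that schedule under stated assumptions: the backbone choice (VGG16 here), the `mammo_loader` and `prostate_loader` DataLoaders, and all hyperparameters are illustrative placeholders, not the authors' implementation.

    import torch
    import torch.nn as nn
    from torchvision import models

    def build_network(num_outputs: int) -> nn.Module:
        # Stage 0: start from an ImageNet-pretrained backbone
        # (the starting point of traditional TL). VGG16 is used
        # here only as a stand-in architecture.
        net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features,
                                       num_outputs)
        return net

    def fine_tune(net: nn.Module, loader, epochs: int, lr: float) -> nn.Module:
        # Generic fine-tuning loop reused for both XTL stages.
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        loss_fn = nn.BCEWithLogitsLoss()
        net.train()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(net(x), y)
                loss.backward()
                opt.step()
        return net

    net = build_network(num_outputs=1)

    # Stage 1 (XTL): fine-tune on the intermediate, unrelated medical
    # set (screening mammograms in the paper). `mammo_loader` is a
    # hypothetical DataLoader of (image, label) batches.
    net = fine_tune(net, mammo_loader, epochs=10, lr=1e-4)

    # Stage 2: fine-tune a second time on the target task, e.g.
    # ProstateX malignancy classification. `prostate_loader` is
    # likewise hypothetical; epochs and learning rates are assumptions.
    net = fine_tune(net, prostate_loader, epochs=10, lr=1e-5)

For the segmentation arm, the figure of merit is the Dice coefficient, Dice = 2|P ∩ T| / (|P| + |T|) for predicted mask P and ground-truth mask T. A minimal NumPy implementation for binary masks:

    import numpy as np

    def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
        # Dice = 2|P ∩ T| / (|P| + |T|) on binary masks; returns 1.0
        # by convention when both masks are empty.
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        denom = pred.sum() + truth.sum()
        return 2.0 * inter / denom if denom else 1.0

For the classification arm, the AUC values quoted in the abstract are areas under the ROC curve, which can be computed from the network's malignancy scores with, for example, `sklearn.metrics.roc_auc_score`.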

Keywords