Applied Sciences (Jun 2020)

Towards a Better Understanding of Transfer Learning for Medical Imaging: A Case Study

  • Laith Alzubaidi,
  • Mohammed A. Fadhel,
  • Omran Al-Shamma,
  • Jinglan Zhang,
  • J. Santamaría,
  • Ye Duan,
  • Sameer R. Oleiwi

DOI
https://doi.org/10.3390/app10134523
Journal volume & issue
Vol. 10, no. 13
p. 4523

Abstract


One of the main challenges of employing deep learning models in medicine is the lack of training data, since collecting and labeling data must be performed by experts. To overcome this drawback, transfer learning (TL) has been utilized to solve several medical imaging tasks using state-of-the-art models pre-trained on the ImageNet dataset. However, there are fundamental differences in data features, sizes, and task characteristics between natural image classification and the targeted medical imaging tasks. As a result, TL yields only a slight performance improvement when the source domain is completely different from the target domain. In this paper, we explore the benefit of TL from both the same domain as and a different domain from the target task. To do so, we designed a deep convolutional neural network (DCNN) model that integrates three ideas: traditional and parallel convolutional layers, residual connections, and global average pooling. We trained the proposed model under several scenarios, applying same-domain and different-domain TL to the diabetic foot ulcer (DFU) classification task and to the animal classification task. We show empirically that TL from the same domain as the target dataset can significantly improve performance, even when only a reduced number of same-domain images is available. On the DFU dataset, the proposed model achieved an F1-score of 86.6% when trained from scratch, 89.4% with TL from a different domain than the target dataset, and 97.6% with TL from the same domain as the target dataset.
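The abstract names three architectural ideas but does not give their exact configuration. Below is a minimal, hypothetical sketch (not the authors' code) of how traditional convolutional layers, parallel convolutional branches, a residual connection, and global average pooling can be combined in one DCNN; all layer widths, kernel sizes, and the two-class output are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ParallelResidualBlock(nn.Module):
    """Two parallel conv branches whose concatenated output is added back
    to a 1x1-projected copy of the input (residual connection)."""

    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(branch_ch), nn.ReLU(inplace=True),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2),
            nn.BatchNorm2d(branch_ch), nn.ReLU(inplace=True),
        )
        # 1x1 projection so the shortcut matches the concatenated width
        self.project = nn.Conv2d(in_ch, 2 * branch_ch, kernel_size=1)

    def forward(self, x):
        out = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        return torch.relu(out + self.project(x))


class SketchDCNN(nn.Module):
    def __init__(self, num_classes=2):  # e.g., binary DFU classification (assumed)
        super().__init__()
        self.stem = nn.Sequential(            # traditional conv layers
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.block = ParallelResidualBlock(32, 32)
        self.gap = nn.AdaptiveAvgPool2d(1)    # global average pooling
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.block(self.stem(x))
        return self.fc(self.gap(x).flatten(1))


if __name__ == "__main__":
    model = SketchDCNN()
    print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 2])
```

For same-domain TL as described in the abstract, such a model would first be trained on a larger dataset of related skin/wound images and then fine-tuned on the smaller DFU dataset; for different-domain TL, the pre-training data would come from an unrelated source such as ImageNet-style natural images.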

Keywords