PLoS ONE (Jan 2022)

Data augmentation based on multiple oversampling fusion for medical image segmentation

  • Liangsheng Wu,
  • Jiajun Zhuang,
  • Weizhao Chen,
  • Yu Tang,
  • Chaojun Hou,
  • Chentong Li,
  • Zhenyu Zhong,
  • Shaoming Luo

Journal volume & issue
Vol. 17, no. 10

Abstract

A high-performance medical image segmentation model based on deep learning depends on the availability of large amounts of annotated training data. However, obtaining sufficient annotated medical images is not trivial. Moreover, the small size of most tissue lesions, e.g., pulmonary nodules and liver tumours, worsens the class imbalance problem in medical image segmentation. In this study, we propose a multidimensional data augmentation method that combines affine transformation with random oversampling. The training data are first expanded by affine transformation combined with random oversampling to improve the prior data distribution of small objects and the diversity of samples. Second, class weight balancing is used to prevent the network from becoming biased, since the number of background pixels is far higher than the number of lesion pixels; the class imbalance problem is addressed by using a weighted cross-entropy loss function during training of the CNN model. The LUNA16 and LiTS17 datasets were used to evaluate our method, with four deep neural network models, Mask R-CNN, U-Net, SegNet and DeepLabv3+, adopted for small tissue lesion segmentation in CT images. Incorporating the proposed data augmentation strategy greatly improved the small tissue lesion segmentation performance of all four architectures on both datasets. The best pixelwise segmentation performance for both pulmonary nodules and liver tumours was obtained by the Mask R-CNN model, with DSC values of 0.829 and 0.879, respectively, comparable to those of state-of-the-art methods.
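The two ingredients described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the authors' implementation: the function names are invented, the affine transform is reduced to 90-degree rotations plus flips, and the inverse-frequency weighting scheme is one common choice for a weighted cross-entropy loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_augment(image, mask):
    """Apply a random affine-style transform (a 90-degree rotation plus an
    optional horizontal flip) to an image/mask pair. A full pipeline would
    also include scaling, shearing and translation (hypothetical subset)."""
    k = int(rng.integers(0, 4))             # random multiple of 90 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                  # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image, mask

def oversample_lesions(images, masks, factor=3):
    """Randomly oversample slices that contain lesion pixels: each such
    slice is re-drawn `factor` extra times with a fresh affine transform,
    improving the prior distribution of the small-object class."""
    out_imgs, out_masks = list(images), list(masks)
    for img, msk in zip(images, masks):
        if msk.any():                       # slice contains a lesion
            for _ in range(factor):
                aug_i, aug_m = affine_augment(img, msk)
                out_imgs.append(aug_i)
                out_masks.append(aug_m)
    return out_imgs, out_masks

def class_weights(masks):
    """Inverse-frequency class weights for a weighted cross-entropy loss:
    the rare lesion class receives a proportionally larger weight."""
    flat = np.concatenate([m.ravel() for m in masks])
    counts = np.bincount(flat, minlength=2).astype(float)
    return counts.sum() / (2.0 * counts)    # [background_w, lesion_w]
```

With weights computed this way, the lesion class contributes as much to the loss as the background class despite occupying far fewer pixels, which is the class-weight-balancing effect the abstract describes.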