Scientific Reports (Jan 2022)

Asymmetry between right and left fundus images identified using convolutional neural networks

  • Tae Seen Kang,
  • Bum Jun Kim,
  • Ki Yup Nam,
  • Seongjin Lee,
  • Kyonghoon Kim,
  • Woong-sub Lee,
  • Jinhyun Kim,
  • Yong Seop Han

DOI
https://doi.org/10.1038/s41598-021-04323-3
Journal volume & issue
Vol. 12, no. 1
pp. 1 – 8

Abstract

We analyzed fundus images to determine whether convolutional neural networks (CNNs) can discriminate between right and left fundus images. We gathered 98,038 fundus photographs from Gyeongsang National University Changwon Hospital, South Korea, and augmented these with the Ocular Disease Intelligent Recognition dataset. We created eight combinations of image sets to train CNNs. Class activation mapping was used to identify the discriminative image regions used by the CNNs. The CNNs identified right and left fundus images with high accuracy (more than 99.3% on the Gyeongsang National University Changwon Hospital dataset and 91.1% on the Ocular Disease Intelligent Recognition dataset), regardless of whether the images were flipped horizontally. The depth and complexity of the CNN affected the accuracy (DenseNet121: 99.91%, ResNet50: 99.86%, VGG19: 99.37%). DenseNet121 could not discriminate within an image set composed of only left eyes (55.1%, p = 0.548). Class activation mapping identified the macula as the discriminative region used by the CNNs. Several previous studies have augmented fundus photograph datasets by horizontal flipping. However, flipped photographs remain distinguishable from non-flipped images, and this asymmetry could introduce undesired bias into machine learning. Therefore, care should be taken when applying flip-based data augmentation while developing a CNN on fundus photographs.
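
For illustration, below is a minimal sketch of class activation mapping (CAM, Zhou et al. 2016) over a DenseNet121 backbone in PyTorch, in the spirit of the method described in the abstract. The two-class head (right/left laterality), the file name fundus.jpg, and the 224 x 224 preprocessing are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.densenet121(weights=None)
# Hypothetical 2-class head: 0 = right eye, 1 = left eye.
model.classifier = torch.nn.Linear(model.classifier.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical input image path.
img = preprocess(Image.open("fundus.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    # torchvision's DenseNet exposes its convolutional trunk as
    # model.features; the forward pass then applies ReLU and global
    # average pooling before the linear classifier.
    fmaps = F.relu(model.features(img))            # (1, 1024, 7, 7)
    pooled = F.adaptive_avg_pool2d(fmaps, 1).flatten(1)
    logits = model.classifier(pooled)
    pred = logits.argmax(dim=1).item()

    # Classic CAM: weight each feature map by the classifier weight
    # for the predicted class, then sum over channels.
    w = model.classifier.weight[pred]              # (1024,)
    cam = torch.einsum("c,chw->hw", w, fmaps[0])
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    # Upsample to input resolution for overlay on the fundus image;
    # in the paper, such maps highlighted the macula.
    cam = F.interpolate(cam[None, None], size=(224, 224),
                        mode="bilinear", align_corners=False)[0, 0]

print(f"predicted laterality class: {pred}")

In the same spirit, the paper's caution about flip-based augmentation amounts, in a pipeline like the one above, to leaving transforms such as torchvision's RandomHorizontalFlip out of the training preprocessing for laterality-sensitive tasks.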