IEEE Access (Jan 2023)

Real-Time Identification of Malignant Biliary Strictures on Cholangioscopy Images Using Explainable Convolutional Neural Networks With Heatmaps

  • Passakron Phuangthongkham,
  • Phonthep Angsuwatcharakon,
  • Santi Kulpatcharapong,
  • Peerapon Vateekul,
  • Rungsun Rerknimitr

DOI
https://doi.org/10.1109/ACCESS.2023.3276642
Journal volume & issue
Vol. 11
pp. 49943–49956

Abstract


Determining whether a bile duct stricture is benign or malignant is difficult. Digital single-operator cholangioscopy now allows endoscopists to examine the bile duct with greater accuracy, so lesions can be inspected directly. However, diagnoses remain inconsistent among endoscopists, and biopsy is therefore considered the gold standard: tissue samples are taken from the bile duct and examined histologically. A false-negative cancer diagnosis caused by a biopsy sampling error forces the procedure to be repeated. In this work, we propose a convolutional neural network designed specifically to classify malignant biliary strictures in real time. The model outputs not only a classification but also a class activation map, relying solely on image-level labels rather than positional annotations. It focuses mainly on tissue rather than on equipment such as the guide wire. To enlarge the set of cholangioscopy images and address the equipment issue, we introduce a “guide-wire augmentation.” The still-image model is further modified to support video inference. In our experiments, all models are evaluated on three patient-based bootstraps. The data comprise 104 patients’ records collected at King Chulalongkorn Memorial Hospital, totaling 885 images: 447 malignant and 438 benign. On still images, the model achieves a sensitivity of 0.8577 and an F1-score of 0.8395. With our video-inference algorithm, sensitivity reaches 0.9024 and the F1-score 0.9193. Finally, the model supports real-time inference at 83 frames per second.
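The class activation map mentioned in the abstract is derived from the final convolutional feature maps weighted by the classifier head, so it needs only an image-level label. A minimal NumPy sketch of the standard CAM computation (an illustration of the general technique, not the authors' implementation; all names and shapes are hypothetical):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Weighted sum of the last conv layer's feature maps for one class.

    feature_maps: (C, H, W) activations after the final conv layer
    fc_weights:   (num_classes, C) weights of the classifier head
    class_idx:    target class (e.g. malignant)
    """
    # Contract the channel axis: (C,) . (C, H, W) -> (H, W)
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    cam = np.maximum(cam, 0)          # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize to [0, 1] for heatmap display
    return cam

# Toy example: 4 channels over an 8x8 spatial grid, 2 classes
rng = np.random.default_rng(0)
fmaps = rng.random((4, 8, 8))
weights = rng.standard_normal((2, 4))
heatmap = class_activation_map(fmaps, weights, class_idx=1)
print(heatmap.shape)  # (8, 8)
```

In practice the resulting low-resolution map is upsampled to the input image size and overlaid on the cholangioscopy frame as a heatmap.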

Keywords