IEEE Access (Jan 2023)

Performance and Evaluation of Deep Learning Models for Uterus Detection on Soft-Tissue Cadavers in Laparoscopic Gynecology

  • Apiwat Boonkong,
  • Kovit Khampitak,
  • Daranee Hormdee

DOI
https://doi.org/10.1109/ACCESS.2023.3293006
Journal volume & issue
Vol. 11
pp. 72027 – 72036

Abstract


Computer Vision is one of the technological forces currently shaping our future. This holds across many fields, including laparoscopic gynecology, where computer-aided object recognition could help surgeons during ongoing surgeries and support offline surgical skills training. However, most previous work has been retrospective, has focused on methodology from a computational viewpoint, and has relied on minimal datasets to show how Computer Vision can be applied to laparoscopic surgery. The main purpose of this paper is not only to evaluate state-of-the-art object detection models for uterus detection, but also to emphasize clinical application through collaboration between surgeons and peopleware, which is essential for the further development and adoption of this technology and, ultimately, for improved clinical outcomes in Laparoscopic Gynecology. Two experimental phases were conducted. In Phase#1, 8 different Deep Learning models for uterus detection were trained and tested on a dataset obtained from 42 public YouTube videos of Laparoscopic Gynecologic Surgery. To validate this new technology before applying it to patients, and in keeping with the ethics of human experimentation, extensive testing was carried out on soft-tissue cadavers, since a soft-tissue cadaver is theoretically the closest substitute for a living human in terms of shape and structure. In Phase#2, the best models from the first phase were therefore run on a real-time streaming feed during 4 soft-tissue cadaver laparoscopic surgeries. The models scrutinized were four pre-trained on the COCO 2017 dataset from the TensorFlow Model Zoo (CenterNet, EfficientDet, SSD, and Faster R-CNN), YOLOv4 on the Darknet framework, and YOLOv4, YOLOv5, and YOLOv7 on PyTorch. Inference speed (in FPS, frames per second), F1-score, and AP (Average Precision) were used as evaluation metrics. The results showed that all 3 YOLO models on PyTorch outperformed the other models on every effectiveness metric, with inference speeds high enough for real-time surgery. Finally, a by-product yet useful contribution of this work is the annotated uterus-detection dataset built from both the public videos and the live cadaver-surgery feeds.
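
For readers less familiar with the reported metrics, the sketch below illustrates how an F1-score and inference speed (FPS) are commonly derived for a single-class detector such as the uterus models evaluated here. It is a minimal, illustrative example assuming standard IoU-based matching at a 0.5 threshold and a generic callable detector; it is not code from the paper.

```python
import time
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixel coordinates

def iou(a: Box, b: Box) -> float:
    """Intersection-over-Union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def f1_score(preds: List[Box], gts: List[Box], thr: float = 0.5) -> float:
    """Greedy one-to-one matching of predictions to ground truth at IoU >= thr,
    then F1 from the resulting precision and recall (single-class case)."""
    matched = set()
    tp = 0
    for p in preds:
        best_i, best_iou = None, thr
        for i, g in enumerate(gts):
            score = iou(p, g)
            if i not in matched and score >= best_iou:
                best_i, best_iou = i, score
        if best_i is not None:
            matched.add(best_i)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    return 2 * precision * recall / (precision + recall + 1e-9)

def measure_fps(detector, frames) -> float:
    """Average frames per second; `detector` is any callable that takes one frame."""
    start = time.perf_counter()
    for frame in frames:
        detector(frame)
    return len(frames) / (time.perf_counter() - start)
```

AP additionally sweeps the detector's confidence threshold and integrates precision over recall; toolkits such as the COCO evaluation API or the built-in validation scripts of the YOLO frameworks report it directly.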

Keywords