Healthcare Technology Letters (Aug 2017)

Can surgical simulation be used to train detection and classification of neural networks?

  • Odysseas Zisimopoulos,
  • Evangello Flouty,
  • Mark Stacey,
  • Sam Muscroft,
  • Petros Giataganas,
  • Jean Nehme,
  • Andre Chow,
  • Danail Stoyanov

DOI: https://doi.org/10.1049/htl.2017.0064

Abstract

Computer-assisted interventions (CAI) aim to increase the effectiveness, precision and repeatability of procedures to improve surgical outcomes. The presence and motion of surgical tools is a key information input for CAI surgical phase recognition algorithms. Vision-based tool detection and recognition approaches are an attractive solution and can be designed to take advantage of the powerful deep learning paradigm that is rapidly advancing image recognition and classification. The challenge for such algorithms is the availability and quality of labelled data used for training. In this Letter, surgical simulation is used to train tool detection and segmentation based on deep convolutional neural networks and generative adversarial networks. The authors experiment with two network architectures for image segmentation of tool classes commonly encountered during cataract surgery. A commercially available simulator is used to create a simulated cataract dataset for training models prior to performing transfer learning on real surgical data. To the best of the authors' knowledge, this is the first attempt to train deep learning models for surgical instrument detection on simulated data while demonstrating promising generalisation to real data. Results indicate that simulated data has some potential for training advanced classification methods for CAI systems.
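
To illustrate the pretrain-on-simulation, fine-tune-on-real workflow the abstract describes, below is a minimal PyTorch sketch. The FCN-ResNet50 architecture, class count, dummy data loaders and hyperparameters are illustrative assumptions for the sake of a runnable example, not the networks or settings reported in the Letter.

```python
# Hypothetical sketch: pre-train a segmentation CNN on simulated frames,
# then fine-tune (transfer learning) on real surgical frames.
# Model, class count and data are assumptions, not the Letter's setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models.segmentation import fcn_resnet50

NUM_CLASSES = 5  # background + 4 hypothetical tool classes (assumption)

def make_dummy_loader(n=8):
    # Random stand-ins for (frame, per-pixel label) pairs so the sketch
    # runs end-to-end; replace with loaders over simulated/real frames.
    images = torch.randn(n, 3, 128, 128)
    masks = torch.randint(0, NUM_CLASSES, (n, 128, 128))
    return DataLoader(TensorDataset(images, masks), batch_size=4)

def train(model, loader, epochs, lr, device):
    # One loop reused for both the simulated and the real training phase.
    model.to(device).train()
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            opt.zero_grad()
            logits = model(images)["out"]  # (B, NUM_CLASSES, H, W)
            loss = loss_fn(logits, masks)
            loss.backward()
            opt.step()
    return model

device = "cuda" if torch.cuda.is_available() else "cpu"
model = fcn_resnet50(weights=None, weights_backbone=None,
                     num_classes=NUM_CLASSES)

# Phase 1: train from scratch on the simulated cataract dataset.
sim_loader = make_dummy_loader()
model = train(model, sim_loader, epochs=2, lr=1e-4, device=device)

# Phase 2: transfer learning -- fine-tune on labelled real frames, here
# freezing the backbone so only the segmentation head adapts (one common
# strategy; the Letter's exact fine-tuning scheme may differ).
for p in model.backbone.parameters():
    p.requires_grad = False
real_loader = make_dummy_loader()
model = train(model, real_loader, epochs=2, lr=1e-5, device=device)
```

The key design point is that both phases share the same loss and training loop; only the data source and the set of trainable parameters change, which is what lets cheap simulated labels substitute for scarce real annotations.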

Keywords