Journal of Ophthalmology (Jan 2016)

Man versus Machine: Software Training for Surgeons—An Objective Evaluation of Human and Computer-Based Training Tools for Cataract Surgical Performance

  • Nizar Din,
  • Phillip Smith,
  • Krisztina Emeriewen,
  • Anant Sharma,
  • Simon Jones,
  • James Wawrzynski,
  • Hongying Tang,
  • Paul Sullivan,
  • Silvestro Caputo,
  • George M. Saleh

DOI
https://doi.org/10.1155/2016/3548039
Journal volume & issue
Vol. 2016

Abstract


This study aimed to address two questions: firstly, to examine the relationship between two cataract surgical feedback tools for training, one human and one software based, and, secondly, to evaluate microscope control during phacoemulsification using the software. Videos of surgeons with varying experience were collected and independently scored with the validated PhacoTrack motion capture software and the Objective Structured Assessment of Cataract Surgical Skill (OSACCS) human scoring tool. Microscope centration and path length travelled were also evaluated with the PhacoTrack software. Twenty-two videos were used to correlate PhacoTrack motion capture with OSACCS. The PhacoTrack path length, number of movements, and total procedure time showed high Spearman's rank correlations with OSACCS of -0.68 (p=0.001), -0.67 (p=0.002), and -0.77 (p=0.001), respectively. Sixty-two videos were used to evaluate microscope camera control. Novice surgeons held the camera off the pupil centre at a far greater mean (SD) distance of 6.9 (3.3) mm than experts, at 3.6 (1.6) mm (p<0.05). Expert surgeons maintained good microscope camera control and limited the total pupil path length travelled to 2512 (1031) mm, compared with 4049 (2709) mm for novices (p<0.05). Good agreement exists between human and machine-quantified measurements of surgical skill. Our results demonstrate that surrogate markers for camera control are predictors of surgical skill.
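The agreement reported above rests on Spearman's rank correlation between the PhacoTrack metrics and OSACCS scores. A minimal pure-Python sketch of that statistic, run on hypothetical data (the surgeon counts, path lengths, and scores below are illustrative, not the study's):

```python
def ranks(xs):
    """Assign ranks (1-based), averaging ranks across ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical example: instrument path length (mm) versus OSACCS score
# for six surgeons. Shorter paths accompany higher scores, so the
# correlation is strongly negative, as in the study's findings.
path_mm = [4800, 4100, 3500, 2900, 2600, 2300]
osaccs = [18, 22, 25, 30, 33, 38]
print(round(spearman(path_mm, osaccs), 2))
```

Because these toy data are strictly monotone, rho comes out at -1; real surgical data would yield intermediate values such as the -0.68 reported for path length.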