Physics and Imaging in Radiation Oncology (Jan 2023)
Towards interactive deep-learning for tumour segmentation in head and neck cancer radiotherapy
Abstract
Background and purpose: Deep learning has substantially improved gross tumour volume (GTV) auto-segmentation, but substantial manual corrections are still needed. With interactive deep learning (iDL), these manual corrections can be used to update the deep-learning tool during delineation, minimising the user input needed to achieve acceptable segmentations. We present an iDL tool for GTV segmentation that takes annotated slices as input, and simulate its performance on a head and neck cancer (HNC) dataset.

Materials and methods: Multimodal image data of 204 HNC patients with clinical tumour and lymph node GTV delineations were used. A baseline convolutional neural network (CNN) was trained (n = 107 training, n = 22 validation) and tested (n = 24). Subsequently, user input was simulated on the initial test set by replacing one or more predicted slices with the ground-truth delineation, followed by re-training the CNN. The objective was to optimise the re-training parameters and to simulate slice selection scenarios while limiting the annotations to a maximum of five slices. The remaining 51 patients were used as an independent test set, on which the Dice similarity coefficient (DSC), mean surface distance (MSD), and 95% Hausdorff distance (HD95%) were assessed at baseline and after every CNN update.

Results: Median segmentation accuracy at baseline was DSC = 0.65, MSD = 4.3 mm, and HD95% = 17.5 mm. The best results were obtained by updating the CNN with three slices sampled equally along the craniocaudal axis of the GTV in the first round, followed by two rounds of annotating one extra slice each. Accuracy improved to DSC = 0.82, MSD = 1.6 mm, and HD95% = 4.8 mm. Every CNN update took 30 s.

Conclusions: The presented iDL tool achieved substantial segmentation improvement with only five annotated slices.
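The simulated user input described above (replacing predicted slices with the ground-truth delineation, with slices sampled equally along the craniocaudal axis) can be sketched as follows. This is an illustrative NumPy-based sketch, not the authors' implementation; the function names, binary-mask representation, and the simplified overlap-based Dice metric are assumptions.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary 3D masks (Z, Y, X)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def pick_slices(gt, n):
    """Sample n slice indices equally along the craniocaudal (Z) extent of the GTV."""
    zs = np.where(gt.any(axis=(1, 2)))[0]  # axial slices containing tumour
    return [int(zs[int(round(i))]) for i in np.linspace(0, len(zs) - 1, n)]

def replace_slices(pred, gt, slice_idx):
    """Simulate user input: overwrite predicted axial slices with ground truth."""
    corrected = pred.copy()
    for z in slice_idx:
        corrected[z] = gt[z]
    return corrected  # corrected volume would then drive CNN re-training
```

In the study's best-performing scenario, three such slices are annotated in the first round and one extra slice in each of two further rounds, with the CNN re-trained after every round.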