BMC Oral Health (Aug 2024)

A hierarchical deep learning approach for diagnosing impacted canine-induced root resorption via cone-beam computed tomography

  • Zeynab Pirayesh,
  • Hossein Mohammad-Rahimi,
  • Saeed Reza Motamedian,
  • Sepehr Amini Afshar,
  • Reza Abbasi,
  • Mohammad Hossein Rohban,
  • Mina Mahdian,
  • Mitra Ghazizadeh Ahsaie,
  • Mina Iranparvar Alamdari

DOI
https://doi.org/10.1186/s12903-024-04718-4
Journal volume & issue
Vol. 24, no. 1
pp. 1–11

Abstract

Objectives
Canine-induced root resorption (CIRR) is caused by impacted canines. Cone-beam computed tomography (CBCT) has been shown to be more accurate in diagnosing CIRR than panoramic and periapical radiography, with reported AUCs of 0.95, 0.49, and 0.57, respectively. The aim of this study was to use deep learning to automatically diagnose CIRR in maxillary incisors from CBCT images.

Methods
A total of 50 CBCT scans comprising 176 incisors were selected for the present study. The maxillary incisors were manually segmented from the CBCT images and labeled by two independent radiologists as either healthy or affected by root resorption induced by the impacted canines. Five training strategies were compared: (A) classification with a 3D ResNet50 (baseline); (B) classification of the segmented masks produced by a 3D U-Net pretrained on 3D MNIST; (C) training a 3D U-Net for the segmentation task and using its outputs for classification; (D) pretraining a 3D U-Net for segmentation and transferring the full model; and (E) pretraining a 3D U-Net for segmentation and fine-tuning with only the encoder transferred. The segmentation models were evaluated using the mean intersection over union (mIoU) and Dice coefficient (DSC); the classification models were evaluated in terms of accuracy, precision, recall, and F1 score.

Results
The segmentation model achieved an mIoU of 0.641 and a DSC of 0.901, indicating good performance in segmenting tooth structures from the CBCT images. For the main task of detecting CIRR, Model C (classification of the segmented masks using a 3D ResNet) and Model E (pretraining on segmentation followed by fine-tuning for classification) performed best, both achieving 82% classification accuracy and an F1 score of 0.62 on the test set. These results demonstrate that the proposed hierarchical, data-efficient deep learning approaches improve the accuracy of automated CIRR diagnosis from limited CBCT data relative to the 3D ResNet baseline.

Conclusion
The proposed approaches are effective at improving the accuracy of classification tasks and are helpful when the diagnosis depends on the volume and boundaries of an object. While the study demonstrated promising results, future studies with larger sample sizes are required to validate the effectiveness of the proposed method for medical image classification tasks.
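For reference, the two segmentation metrics reported above have standard definitions. The following is a minimal Python sketch of those definitions for binary 3D masks; it is illustrative only, not code from the paper.

import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # DSC = 2|A ∩ B| / (|A| + |B|) for binary volumes
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    # IoU = |A ∩ B| / |A ∪ B| for binary volumes
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# mIoU is the IoU averaged over the evaluated volumes (or classes), e.g.:
# m_iou = np.mean([iou(p, t) for p, t in zip(predictions, targets)])

Strategies D and E follow a common transfer pattern: pretrain a 3D U-Net on segmentation, then reuse all or part of it for classification. Below is a hypothetical PyTorch sketch of the encoder-only variant (strategy E); the module names and feature dimensions are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class EncoderClassifier(nn.Module):
    # Wraps a pretrained 3D U-Net encoder (assumed interface) with a small
    # classification head for the binary healthy-vs-CIRR decision.
    def __init__(self, encoder: nn.Module, feat_dim: int, n_classes: int = 2):
        super().__init__()
        self.encoder = encoder               # pretrained 3D U-Net encoder
        self.pool = nn.AdaptiveAvgPool3d(1)  # collapse spatial dimensions
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)                  # (B, feat_dim, D, H, W)
        z = self.pool(z).flatten(1)          # (B, feat_dim)
        return self.head(z)

# Fine-tuning can optionally freeze the transferred encoder and train only
# the classification head:
# for p in model.encoder.parameters():
#     p.requires_grad = False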
