IET Image Processing (May 2022)

The local ternary pattern encoder–decoder neural network for dental image segmentation

  • Omran Salih
  • Kevin Jan Duffy

DOI
https://doi.org/10.1049/ipr2.12416
Journal volume & issue
Vol. 16, no. 6
pp. 1520–1530

Abstract

Recent advances in medical image analysis, especially the use of deep learning, are helping to identify, detect, classify, and quantify patterns in radiographs. At the centre of these advances is the ability to explore hierarchical feature representations learned from data. Deep learning has become the most sought-after technique, leading to enhanced performance in the analysis of medical applications and systems. In particular, deep learning techniques have achieved improved results in dental image segmentation. Segmentation of dental radiographs is a crucial step that helps dentists to diagnose dental caries. However, the performance of the deep networks used for these analyses is constrained by various challenging features of dental carious lesions. Segmentation of dental images is often difficult due to wide variations in topology, the intricacies of anatomical structure, and poor image quality caused by conditions such as low contrast, noise, and irregular, fuzzy border edges. These issues are exacerbated by the small number of images available for any particular analysis. A robust local ternary pattern encoder–decoder network (LTPEDN) is proposed to overcome these dental image segmentation challenges and to minimise the computational resources required. The new architecture is a modification of existing methods that uses a local ternary pattern (LTP). Images are preprocessed via augmentation and normalisation techniques to enlarge and prepare the datasets, which are then passed to the LTPEDN for training and testing. Segmentation is performed using non-learnable layers (the LTP layers) and learnable layers (standard convolution layers) to extract the regions of interest containing the teeth. The method was evaluated on an augmented dataset of 11,000 dental images, trained on 8,800 training images and tested on 2,200 testing images, and is shown to be 94.32% accurate.
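To make the mix of fixed and trainable layers concrete, below is a minimal sketch of how non-learnable LTP layers could feed learnable convolution layers in one encoder block. It is written in PyTorch and assumes a 3×3 neighbourhood, a threshold t on normalised intensities, and concatenation of the upper/lower ternary planes as channels; the abstract does not specify the paper's exact layer design, so these details are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LTPLayer(nn.Module):
    """Non-learnable layer: ternary-codes each pixel against its 3x3 neighbours."""
    def __init__(self, t=0.02):
        super().__init__()
        self.t = t  # assumed threshold on [0, 1]-normalised intensities

    def forward(self, x):
        b, c, h, w = x.shape
        # Gather each pixel's 3x3 neighbourhood: (B, C*9, H*W) -> (B, C, 9, H, W).
        patches = F.unfold(F.pad(x, (1, 1, 1, 1), mode="replicate"), kernel_size=3)
        patches = patches.view(b, c, 9, h, w)
        centre = patches[:, :, 4:5]  # index 4 is the window centre
        # Ternary code split into two binary planes, as is usual for an LTP:
        upper = (patches >= centre + self.t).float()  # +1 votes
        lower = (patches <= centre - self.t).float()  # -1 votes
        # Concatenate the planes as extra channels: (B, C*18, H, W).
        return torch.cat([upper, lower], dim=2).reshape(b, c * 18, h, w)

class EncoderBlock(nn.Module):
    """Fixed LTP features feeding a learnable convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.ltp = LTPLayer()                                    # non-learnable
        self.conv = nn.Conv2d(in_ch * 18, out_ch, 3, padding=1)  # learnable

    def forward(self, x):
        return F.relu(self.conv(self.ltp(x)))

block = EncoderBlock(in_ch=1, out_ch=32)
features = block(torch.rand(4, 1, 128, 128))  # -> (4, 32, 128, 128)
```

Keeping the LTP layers fixed means they contribute texture-sensitive features without adding trainable parameters, which is consistent with the abstract's stated aim of minimising the computational resources required.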