IEEE Access (Jan 2020)

Efficient Lung Nodule Classification Using Transferable Texture Convolutional Neural Network

  • Imdad Ali,
  • Muhammad Muzammil,
  • Ihsan Ul Haq,
  • Muhammad Amir,
  • Suheel Abdullah

DOI
https://doi.org/10.1109/ACCESS.2020.3026080
Journal volume & issue
Vol. 8
pp. 175859–175870

Abstract


Lung nodules are vital indicators of the presence of lung cancer. Early detection improves a patient's survival rate by allowing treatment to begin at the right time. Detecting and classifying malignancy in Computed Tomography (CT) images is a time-consuming and difficult task for radiologists, which has led researchers to develop Computer-Aided Diagnosis (CAD) systems to mitigate this burden. The performance of CAD systems continues to improve through the use of various deep learning techniques for lung cancer screening. In this paper, we propose a transferable texture Convolutional Neural Network (CNN) to improve the classification performance of pulmonary nodules in CT scans. An Energy Layer (EL) is incorporated into our scheme, which extracts texture features from the convolutional layer. The inclusion of the EL reduces the number of learnable parameters in the network, which in turn reduces memory requirements and computational complexity. The proposed model has only three convolutional layers and one EL in place of a pooling layer. Overall, the proposed CNN architecture comprises nine layers for automatic feature extraction and classification of pulmonary nodule candidates as malignant or benign. Furthermore, the pre-trained proposed CNN is also used to handle classification on smaller datasets via transfer learning. This work has been evaluated on the publicly available LIDC-IDRI and LUNGx Challenge databases using different evaluation metrics, such as accuracy, specificity, error rate, and AUC. The proposed model is trained with six-fold cross-validation and achieves an accuracy of 96.69% ± 0.72% with an error rate of only 3.30% ± 0.72%, while the measured AUC and recall are 99.11% ± 0.45% and 97.19% ± 0.57%, respectively. Moreover, we also tested the proposed technique on the MNIST dataset and achieved state-of-the-art results in terms of accuracy and error rate.
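To make the architecture described above concrete, the following is a minimal PyTorch sketch of a texture CNN in which an energy layer replaces pooling. In the texture-CNN literature, an energy layer typically averages each rectified feature map to a single per-filter "energy" value, which is the interpretation assumed here; the filter counts, kernel sizes, and input patch size below are illustrative assumptions, not the authors' published values.

    import torch
    import torch.nn as nn

    class EnergyLayer(nn.Module):
        """Assumed energy layer: average each feature map to one
        'texture energy' value per filter. It has no learnable
        parameters, which is why it shrinks the model compared with
        a learned pooling/downsampling path."""
        def forward(self, x):
            # x: (batch, channels, H, W) -> (batch, channels)
            return x.mean(dim=(2, 3))

    class TextureCNN(nn.Module):
        """Illustrative stand-in for the paper's design: three
        convolutional layers, one energy layer instead of pooling,
        then fully connected layers for benign/malignant output.
        Layer widths here are placeholders."""
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.energy = EnergyLayer()
            self.classifier = nn.Sequential(
                nn.Linear(128, 64), nn.ReLU(),
                nn.Linear(64, num_classes),
            )

        def forward(self, x):
            x = self.features(x)
            x = self.energy(x)        # (batch, 128) texture-energy vector
            return self.classifier(x)

    # Example: a batch of 64x64 single-channel nodule patches
    model = TextureCNN()
    logits = model(torch.randn(8, 1, 64, 64))  # -> shape (8, 2)

Because the energy layer collapses each feature map to a scalar, the classifier input size depends only on the number of filters, not the spatial resolution, which is one way the parameter count stays small.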
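The abstract also mentions reusing the pre-trained network on smaller datasets via transfer learning. A common realization of that idea, sketched below under the assumption that it applies here, is to freeze the convolutional feature extractor trained on LIDC-IDRI and fine-tune only a fresh classifier head on the smaller target data. The checkpoint path and learning rate are placeholders; TextureCNN comes from the previous sketch.

    import torch
    import torch.nn as nn

    pretrained = TextureCNN()
    # Hypothetical checkpoint from training on the larger dataset
    pretrained.load_state_dict(torch.load("texture_cnn_lidc.pt"))

    for p in pretrained.features.parameters():
        p.requires_grad = False          # freeze texture feature extractor

    pretrained.classifier = nn.Sequential(  # new head for the target task
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 2),
    )

    optimizer = torch.optim.Adam(
        (p for p in pretrained.parameters() if p.requires_grad), lr=1e-4
    )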
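Finally, the reported scores are means and standard deviations over six-fold cross-validation. A short scikit-learn sketch of that evaluation protocol follows; train_fn and predict_fn are hypothetical callables standing in for the model's training and scoring code, and all_patches/labels are placeholder NumPy arrays.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

    def evaluate_six_fold(all_patches, labels, train_fn, predict_fn):
        accs, recalls, aucs = [], [], []
        skf = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)
        for tr, te in skf.split(all_patches, labels):
            model = train_fn(all_patches[tr], labels[tr])   # fit one fold
            probs = predict_fn(model, all_patches[te])      # P(malignant)
            preds = (probs >= 0.5).astype(int)
            accs.append(accuracy_score(labels[te], preds))
            recalls.append(recall_score(labels[te], preds))
            aucs.append(roc_auc_score(labels[te], probs))
        # Report each metric as mean ± std across the six folds
        for name, vals in [("accuracy", accs), ("recall", recalls), ("AUC", aucs)]:
            print(f"{name}: {np.mean(vals):.4f} ± {np.std(vals):.4f}")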

Keywords