IEEE Access (Jan 2020)

Optimal Feature Selection-Based Medical Image Classification Using Deep Learning Model in Internet of Medical Things

  • R. Joshua Samuel Raj,
  • S. Jeya Shobana,
  • Irina Valeryevna Pustokhina,
  • Denis Alexandrovich Pustokhin,
  • Deepak Gupta,
  • K. Shankar

DOI
https://doi.org/10.1109/ACCESS.2020.2981337
Journal volume & issue
Vol. 8
pp. 58006 – 58017

Abstract

The Internet of Medical Things (IoMT) is the collection of medical devices and related applications that link healthcare IT systems through online computer networks. In the field of diagnosis, medical image classification plays an important role in the prediction and early diagnosis of critical diseases. Medical images form an indispensable part of a patient's health record and can be applied to control, handle, and treat diseases. However, image classification remains a challenging task in computer-based diagnostics. In this research article, we introduce an improved classifier, i.e., Optimal Deep Learning (DL), for the classification of lung cancer, brain images, and Alzheimer's disease. We propose an Optimal Feature Selection-based Medical Image Classification model using DL, incorporating preprocessing, feature selection, and classification. The main goal of the paper is to derive an optimal feature selection model for effective medical image classification. To enhance the performance of the DL classifier, an Opposition-based Crow Search (OCS) algorithm is proposed. The OCS algorithm picks the optimal features from the pre-processed images; here, multi-texture and grey-level features were selected for the analysis. Finally, the optimal features improved the classification results and increased the accuracy, specificity, and sensitivity in the diagnosis of medical images. The proposed method was implemented in MATLAB and compared with existing feature selection models and other classification approaches. The proposed model achieved the maximum performance, with accuracy, sensitivity, and specificity of 95.22%, 86.45%, and 100% for the applied set of images.
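To illustrate the feature-selection step described above, the sketch below shows a minimal opposition-based Crow Search loop in Python. It is not the authors' MATLAB implementation: the wrapper fitness (k-NN cross-validation on a stand-in dataset), the thresholding of continuous positions into feature masks, and the hyper-parameter values (flock size, iterations, flight length, awareness probability) are all illustrative assumptions.

```python
# Minimal sketch of Opposition-based Crow Search (OCS) feature selection.
# Assumptions (not from the paper): continuous positions thresholded to binary
# feature masks, a k-NN wrapper fitness, and the hyper-parameters shown below.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)   # stand-in for extracted image features
n_feats = X.shape[1]

def fitness(mask):
    """Wrapper fitness: cross-validated accuracy of k-NN on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def to_mask(pos):
    return pos > 0.5                          # threshold a continuous position to a feature mask

n_crows, n_iter, fl, ap = 10, 20, 2.0, 0.1    # flock size, iterations, flight length, awareness prob.
pos = rng.random((n_crows, n_feats))

# Opposition-based initialisation: keep the better of each crow and its opposite point.
opp = 1.0 - pos
for i in range(n_crows):
    if fitness(to_mask(opp[i])) > fitness(to_mask(pos[i])):
        pos[i] = opp[i]
mem = pos.copy()                              # each crow's memory (best position found so far)
mem_fit = np.array([fitness(to_mask(m)) for m in mem])

for _ in range(n_iter):
    for i in range(n_crows):
        j = rng.integers(n_crows)             # crow i follows a randomly chosen crow j
        if rng.random() > ap:
            new = pos[i] + rng.random() * fl * (mem[j] - pos[i])
        else:                                 # crow j is aware: crow i moves to a random position
            new = rng.random(n_feats)
        pos[i] = np.clip(new, 0.0, 1.0)
        f = fitness(to_mask(pos[i]))
        if f > mem_fit[i]:                    # update memory only on improvement
            mem[i], mem_fit[i] = pos[i], f

best = mem[np.argmax(mem_fit)]
print("selected features:", np.flatnonzero(to_mask(best)), "fitness:", mem_fit.max())
```

In this sketch the selected feature subset would feed the downstream DL classifier; the k-NN wrapper only stands in for whichever evaluation criterion the original pipeline uses.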

Keywords