IEEE Access (Jan 2023)

Fusion of Textural and Visual Information for Medical Image Modality Retrieval Using Deep Learning-Based Feature Engineering

  • Saeed Iqbal,
  • Adnan N. Qureshi,
  • Musaed Alhussein,
  • Imran Arshad Choudhry,
  • Khursheed Aurangzeb,
  • Tariq M. Khan

DOI: https://doi.org/10.1109/ACCESS.2023.3310245
Journal volume & issue: Vol. 11, pp. 93238–93253

Abstract

Medical image retrieval is essential to modern medical treatment because it enables doctors to diagnose and treat a variety of illnesses. In this study, we present a technique for retrieving the modality of medical images by combining textural and visual information. Knowing the imaging process behind an image, such as a chest X-ray, a skin dermatology image, or a breast histopathology image, can be extremely helpful to healthcare professionals, since it aids image investigation and provides important information about the imaging technique used. To this end, we apply deep learning-based feature engineering that exploits both the textural and visual components of healthcare images. We extract detailed visual information from the images using a pretrained Convolutional Neural Network (CNN). The Global-Local Pyramid Pattern (GLPP), Zernike moments, and Haralick features are also used to explicitly capture the relevant textural and structural properties of the images. These characteristics, such as modality- and imaging-technique-specific patterns, provide additional information about the acquisition technology. We employ a feature fusion method that combines the representations obtained from the two streams in order to unite the textural and visual elements. This fusion process improves the discriminative capacity of the feature vectors and makes accurate modality classification possible. We conducted experiments on a sizable dataset of diverse medical images to assess the effectiveness of the proposed method. The results indicate that our technique outperforms conventional methods for modality retrieval, with a precision of 95.89 and a recall of 96.31. The combination of textural and visual data greatly increases the accuracy and robustness of the classification task. Through the integration of textural and visual information, our work offers a method for recovering the modality of medical images that has the potential to improve the speed and accuracy of medical image processing and diagnosis by helping experts rapidly and accurately identify the imaging technology used.
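
As a concrete illustration of the fusion pipeline the abstract describes, the sketch below concatenates deep visual features from a pretrained CNN with handcrafted Haralick and Zernike texture descriptors and trains a simple classifier on the fused vectors for modality prediction. The library choices (torchvision, mahotas, scikit-learn), the ResNet-50 backbone, and the logistic-regression classifier are assumptions for illustration only; GLPP is omitted, and this is not the authors' exact implementation.

```python
# Hedged sketch: early fusion of deep visual features and handcrafted
# texture descriptors for modality classification. All library and
# architecture choices below are assumptions, not the paper's code.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
import mahotas
from sklearn.linear_model import LogisticRegression

# Pretrained CNN used as a fixed visual feature extractor (assumed ResNet-50).
cnn = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()  # drop the classification head, keep 2048-D features
cnn.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def visual_features(rgb_image: np.ndarray) -> np.ndarray:
    """Deep visual feature vector from the CNN backbone (rgb_image: HxWx3 uint8)."""
    with torch.no_grad():
        x = preprocess(rgb_image).unsqueeze(0)
        return cnn(x).squeeze(0).numpy()

def textural_features(gray_image: np.ndarray) -> np.ndarray:
    """Handcrafted descriptors: mean Haralick features plus Zernike moments."""
    gray = gray_image.astype(np.uint8)                       # haralick needs integer input
    haralick = mahotas.features.haralick(gray).mean(axis=0)  # 13-D texture descriptor
    zernike = mahotas.features.zernike_moments(gray, radius=64)  # 25-D shape descriptor
    return np.concatenate([haralick, zernike])

def fused_features(rgb_image: np.ndarray, gray_image: np.ndarray) -> np.ndarray:
    """Early fusion: concatenate visual and textural feature vectors."""
    return np.concatenate([visual_features(rgb_image), textural_features(gray_image)])

# Modality classification on the fused vectors (assumed classifier choice):
# X = np.stack([fused_features(rgb, gray) for rgb, gray in dataset])
# clf = LogisticRegression(max_iter=1000).fit(X, modality_labels)
# predicted_modality = clf.predict(X[:1])
```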

Keywords