Discover Internet of Things (Oct 2023)

Feature selection using differential evolution for microarray data classification

  • Sanjay Prajapati,
  • Himansu Das,
  • Mahendra Kumar Gourisaria

DOI
https://doi.org/10.1007/s43926-023-00042-5
Journal volume & issue
Vol. 3, no. 1
pp. 1 – 14

Abstract

Microarray datasets are very high-dimensional and contain noise and redundancy. They typically have far more features than samples, that is, more columns than rows, which adversely affects algorithm performance. Therefore, extracting precise information from microarray data requires a robust technique. Microarray datasets play a critical role in detecting various diseases, including cancer and tumors, and this is where feature selection comes into play. In recent times, feature selection (FS) has gained significant importance as a data preparation method, particularly for high-dimensional data. It is preferable to solve classification problems with fewer features while maintaining high accuracy, since not all features are needed to achieve this goal; the primary objective of feature selection is to identify the optimal subset of features. In this work, we employ the Differential Evolution (DE) algorithm, a population-based stochastic search approach that is widely used across scientific and technical domains to solve optimization problems in continuous spaces. We combine DE with three classification algorithms: Random Forest (RF), Decision Tree (DT), and Logistic Regression (LR). Our analysis compares the accuracy achieved by each model on each dataset, as well as the fitness error of each model. The results indicate that each model achieved better results with feature selection than without it.
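To illustrate the general idea described in the abstract, the sketch below shows one common way to wrap a classifier inside Differential Evolution for feature selection. It is not the authors' exact pipeline: it assumes a continuous DE vector in [0, 1] per feature, thresholded at 0.5 to form a feature mask, a fitness defined as 1 minus cross-validated accuracy of a Random Forest, and scikit-learn's breast cancer dataset standing in for a microarray dataset.

```python
# Minimal sketch of DE-based feature selection (assumptions noted above;
# not the paper's exact implementation).
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # stand-in for a microarray dataset
n_features = X.shape[1]

def fitness(weights):
    """Fitness error: 1 - cross-validated accuracy on the selected feature subset."""
    mask = weights > 0.5                     # threshold the continuous DE vector into a feature mask
    if not mask.any():                       # penalise an empty feature subset
        return 1.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    acc = cross_val_score(clf, X[:, mask], y, cv=3).mean()
    return 1.0 - acc                         # DE minimises, so return the error

result = differential_evolution(
    fitness,
    bounds=[(0.0, 1.0)] * n_features,        # one continuous gene per feature
    maxiter=10, popsize=10, seed=0, polish=False,
)

selected = np.flatnonzero(result.x > 0.5)
print(f"Selected {selected.size}/{n_features} features, fitness error = {result.fun:.4f}")
```

Swapping RandomForestClassifier for DecisionTreeClassifier or LogisticRegression reproduces, in spirit, the three DE-based model variants compared in the paper.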

Keywords