Scientific Reports (Feb 2021)

Dimensionality reduction using singular vectors

  • Majid Afshar,
  • Hamid Usefi

DOI
https://doi.org/10.1038/s41598-021-83150-y
Journal volume & issue
Vol. 11, no. 1
pp. 1–13

Abstract

A common problem in machine learning and pattern recognition is identifying the most relevant features, particularly when dealing with high-dimensional datasets in bioinformatics. In this paper, we propose a new feature selection method, called Singular-Vectors Feature Selection (SVFS). Let $$D = [A \mid \mathbf{b}]$$ be a labeled dataset, where $$\mathbf{b}$$ is the class label and the features (attributes) are the columns of the matrix A. We show that the signature matrix $$S_A = I - A^{\dagger}A$$ can be used to partition the columns of A into clusters so that columns in a cluster correlate only with columns in the same cluster. In the first step, SVFS uses the signature matrix $$S_D$$ of D to find the cluster that contains $$\mathbf{b}$$. We reduce the size of A by discarding the features in the other clusters as irrelevant. In the next step, SVFS uses the signature matrix $$S_A$$ of the reduced A to partition the remaining features into clusters and chooses the most important features from each cluster. SVFS works perfectly on synthetic datasets, and comprehensive experiments on real-world benchmark and genomic datasets show that it exhibits overall superior performance compared to state-of-the-art feature selection methods in terms of accuracy, running time, and memory usage. A Python implementation of SVFS, along with the datasets used in this paper, is available at https://github.com/Majid1292/SVFS.
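The signature matrix described above can be illustrated with a minimal sketch: $$S_A = I - A^{\dagger}A$$ is the orthogonal projector onto the null space of A, so a nonzero entry $$S_A[i, j]$$ signals a linear dependence linking columns i and j. The toy data and the `signature_matrix` helper below are illustrative assumptions, not the authors' SVFS implementation (which adds clustering and feature ranking on top of this matrix):

```python
import numpy as np

def signature_matrix(A):
    """Compute S_A = I - A_dagger @ A, where A_dagger is the
    Moore-Penrose pseudoinverse. This is the projector onto the
    null space of A: entry (i, j) is nonzero only when columns
    i and j of A participate in a common linear dependence."""
    n = A.shape[1]
    return np.eye(n) - np.linalg.pinv(A) @ A

# Toy dataset: column 1 is an exact multiple of column 0,
# while column 2 is an independent random feature.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
A = np.column_stack([x, 2 * x, rng.normal(size=5)])

S = signature_matrix(A)
# Columns 0 and 1 are correlated, so S[0, 1] is nonzero;
# column 2 is unrelated to column 0, so S[0, 2] vanishes.
print(abs(S[0, 1]) > 1e-8)   # True
print(abs(S[0, 2]) < 1e-8)   # True
```

In the sketch above, the dependence col1 = 2*col0 puts the vector (2, -1, 0) in the null space of A, which is exactly what the nonzero entry S[0, 1] detects; grouping columns by the nonzero pattern of S then yields the clusters the paper describes.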