Scientific Reports (Oct 2024)

Importance estimate of features via analysis of their weight and gradient profile

  • Ho Tung Jeremy Chan,
  • Eduardo Veas

DOI
https://doi.org/10.1038/s41598-024-72640-4
Journal volume & issue
Vol. 14, no. 1
pp. 1–21

Abstract

Understanding what is important and redundant within data can improve the modelling process of neural networks by reducing unnecessary model complexity, training time and memory storage. However, this information is not always available beforehand, nor is it trivial to obtain from neural networks. Existing feature selection methods utilise the internal workings of a neural network for selection; however, further analysis and interpretation of the input features' significance is often limited. We propose an approach that extends these methods by estimating the significance of features through analysis of the gradient descent of a pairwise layer within a model. The changes in the weights and gradients throughout training provide a profile that can be used to better understand the importance hierarchy between the features for ranking and feature selection. Additionally, the method is transferable to existing fully or partially trained models, which is beneficial for understanding existing or active models. The proposed approach is demonstrated empirically in a study using benchmark datasets, such as MNIST and datasets from scikit-feat, as well as a simulated dataset and an applied real-world dataset. Results are verified against the ground truth where available and, otherwise, via comparison with fundamental feature selection methods, including existing statistics-based and embedded neural-network-based feature selection methods, through the Reduce and Retrain methodology.
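To make the core idea concrete, the sketch below illustrates the kind of weight-and-gradient profiling the abstract describes, assuming a simple PyTorch MLP. The per-feature score used here (accumulated gradient magnitude scaled by final weight magnitude on the first layer) is an illustrative proxy, not the authors' exact formulation, and the toy data with a known set of informative features is an assumption for demonstration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 20 features, only the first 5 carry signal (illustrative assumption).
n_samples, n_features, n_informative = 512, 20, 5
X = torch.randn(n_samples, n_features)
true_w = torch.zeros(n_features)
true_w[:n_informative] = torch.randn(n_informative)
y = (X @ true_w > 0).long()

model = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

first_layer = model[0]  # weight shape: (32, n_features); column j touches feature j
weight_profile, grad_profile = [], []

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    # Record a per-feature snapshot each step: aggregate over hidden units (column-wise).
    weight_profile.append(first_layer.weight.detach().abs().mean(dim=0))
    grad_profile.append(first_layer.weight.grad.detach().abs().mean(dim=0))
    opt.step()

# Illustrative importance score from the training profile:
# total gradient activity per feature, scaled by its final weight magnitude.
importance = torch.stack(grad_profile).sum(dim=0) * weight_profile[-1]
ranking = importance.argsort(descending=True)
print("Top-5 ranked features:", ranking[:5].tolist())
```

A Reduce and Retrain style check would then keep only the top-ranked features, retrain the model on that reduced input, and compare performance against the full feature set.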