Machine Learning and Knowledge Extraction (Jan 2020)

Statistical Aspects of High-Dimensional Sparse Artificial Neural Network Models

  • Kaixu Yang,
  • Tapabrata Maiti

DOI
https://doi.org/10.3390/make2010001
Journal volume & issue
Vol. 2, no. 1
pp. 1 – 19

Abstract


An artificial neural network (ANN) is an automatic way of capturing linear and nonlinear correlations, as well as spatial and other structural dependence, among features. ANNs perform well in many application areas, such as classification and prediction from magnetic resonance imaging, spatial data, and computer vision tasks. Most commonly used ANNs assume that the training data are large compared to the dimension of the feature vector. However, in modern applications such as those mentioned above, the training sample size is often small, and may even be smaller than the dimension of the feature vector. In this paper, we consider a single-layer ANN classification model that is suitable for analyzing high-dimensional low-sample-size (HDLSS) data. We investigate the theoretical properties of the sparse group lasso regularized neural network and show that, under mild conditions, its classification risk converges to the risk of the optimal Bayes classifier (universal consistency). Moreover, we propose a variation on the regularization term. A few examples from popular research fields are also provided to illustrate the theory and methods.
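As a rough illustration of the regularizer named in the abstract, the sketch below computes a sparse group lasso penalty for an input-to-hidden weight matrix. The function name, the treatment of each input feature's outgoing weights (one row) as a group, and the λ/α parametrization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sparse_group_lasso_penalty(W, lam=0.1, alpha=0.5):
    """Sparse group lasso penalty for a weight matrix W whose rows are
    the outgoing weights of each input feature (illustrative grouping).

    The group (l2) term can zero out entire rows, removing irrelevant
    input features; the elementwise l1 term promotes sparsity within
    the remaining groups. alpha balances the two terms.
    """
    group_term = np.sum(np.linalg.norm(W, axis=1))  # sum of row-wise l2 norms
    l1_term = np.sum(np.abs(W))                     # elementwise l1 norm
    return lam * ((1 - alpha) * group_term + alpha * l1_term)
```

With α = 0 this reduces to a pure group lasso penalty, and with α = 1 to a pure lasso penalty; in an HDLSS setting the row-wise group term is what drives feature selection.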

Keywords