IEEE Access (Jan 2020)

Accelerating the Training Process of Convolutional Neural Networks for Image Classification by Dropping Training Samples Out

  • Naisen Yang,
  • Hong Tang,
  • Jianwei Yue,
  • Xin Yang,
  • Zhihua Xu

DOI
https://doi.org/10.1109/ACCESS.2020.3013621
Journal volume & issue
Vol. 8
pp. 142393 – 142403

Abstract


Stochastic gradient descent and other adaptive optimization methods have proven effective for training deep neural networks. In each epoch of these methods, the entire training set is used to train the model. Large training sets, however, generally contain redundancy among their samples. In this paper, we present an algorithm, called greedy DropSample, that reduces the training time of CNNs by dropping certain samples from training. Because some training samples are absent, the distribution of the network's activations becomes biased during training; this issue is resolved by correcting the mean and variance of the batch-normalization layers. Experimental results on several data sets demonstrate the efficiency of the proposed method. They show that the method can significantly decrease the training time of multilayer perceptrons (MLPs) and convolutional neural networks (CNNs). Despite the reduced number of training samples, the accuracy of the networks is similar, or even better.
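
To illustrate the two ideas the abstract describes (dropping a subset of samples and then correcting the batch-normalization statistics), the following is a minimal PyTorch sketch. It is not the paper's implementation: the greedy selection criterion (keeping the samples with the highest per-sample loss), the keep_ratio parameter, and the helper names select_subset and recalibrate_bn are all illustrative assumptions; the correction of the BN mean and variance is done here by re-estimating the running statistics over the full training set.

# Hypothetical sketch, not the authors' code: greedily keep "hard" samples
# (high per-sample loss), then re-estimate BatchNorm mean/variance on the
# full training set to correct the bias introduced by training on a subset.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset

def select_subset(model, dataset, keep_ratio=0.7, device="cpu"):
    """Keep the keep_ratio fraction of samples with the highest loss.
    (Assumed criterion; the paper's exact greedy rule may differ.)"""
    model.eval()
    losses = []
    loader = DataLoader(dataset, batch_size=256, shuffle=False)
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            per_sample = nn.functional.cross_entropy(
                model(x), y, reduction="none")
            losses.append(per_sample.cpu())
    losses = torch.cat(losses)
    keep = int(keep_ratio * len(dataset))
    idx = torch.argsort(losses, descending=True)[:keep]
    return Subset(dataset, idx.tolist())

def recalibrate_bn(model, full_dataset, device="cpu"):
    """Re-estimate BatchNorm running mean/variance on the full training
    set, so inference statistics are not biased by the dropped samples."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None  # cumulative moving average over all batches
    model.train()
    loader = DataLoader(full_dataset, batch_size=256, shuffle=False)
    with torch.no_grad():
        for x, _ in loader:
            model(x.to(device))

Under these assumptions, a training loop would call select_subset at the start of an epoch, train only on the returned Subset, and call recalibrate_bn on the full data set before evaluation.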

Keywords