Intelligent Systems with Applications (Sep 2022)
GPU-based accelerated ELM and deep-ELM training algorithms for traditional and deep neural network classifiers
Abstract
The extreme learning machine (ELM) has been used effectively to train single-hidden-layer neural networks. In recent years, deep extreme learning machine (D-ELM) structures, in which deep neural networks are trained via the ELM method, have attracted considerable attention; typically, a stack of ELM auto-encoders followed by a simple ELM output layer is used to solve classification and regression tasks. Although ELM itself speeds up neural network training, D-ELM-based models still suffer from high time complexity and long running times.

In this paper, we explore how the training and evaluation of ELM and D-ELM can be accelerated with GPUs. The proposed method divides each algorithm into three phases. In the first phase, the data are loaded and pre-processed serially on the CPU. In the second and third phases, which are respectively the training and testing phases of the algorithm, all matrix operations are executed in parallel on the GPU, exploiting its memory hierarchy. In addition, highly efficient computational libraries provide further support for GPU-based parallel computing.

In the simulation setup, five datasets are used to train ELM and D-ELM on both CPU and GPU platforms. The results show that the proposed GPU-based approach saves a remarkable amount of running time: although the serial and parallel methods achieve approximately the same accuracy, the parallel implementations reduce the run time significantly.
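To make the three-phase design concrete, the sketch below implements a minimal GPU-accelerated ELM in Python with CuPy. It is an illustrative sketch under stated assumptions, not the authors' implementation: the synthetic dataset, the tanh activation, the ridge regularization term reg, and the function names (load_and_preprocess, train_elm_gpu, test_elm_gpu) are all assumptions introduced for illustration.

import numpy as np
import cupy as cp

def load_and_preprocess():
    """Phase 1 (CPU, serial): load and standardize the data.
    A synthetic two-class problem stands in for the paper's five datasets."""
    rng = np.random.default_rng(0)
    X = rng.standard_normal((10_000, 64)).astype(np.float32)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(np.int64)
    T = np.eye(2, dtype=np.float32)[y]              # one-hot targets
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
    return X[:8000], T[:8000], X[8000:], T[8000:]

def train_elm_gpu(X, T, n_hidden=512, reg=1e-3):
    """Phase 2 (GPU): random hidden layer plus a regularized
    least-squares solve, entirely on the device."""
    Xg, Tg = cp.asarray(X), cp.asarray(T)           # host -> device copies
    W = cp.random.standard_normal((Xg.shape[1], n_hidden), dtype=cp.float32)
    b = cp.random.standard_normal((n_hidden,), dtype=cp.float32)
    H = cp.tanh(Xg @ W + b)                         # hidden activations
    # Ridge-regularized output weights: beta = (H^T H + reg*I)^(-1) H^T T
    A = H.T @ H + reg * cp.eye(n_hidden, dtype=cp.float32)
    beta = cp.linalg.solve(A, H.T @ Tg)
    return W, b, beta

def test_elm_gpu(X, T, W, b, beta):
    """Phase 3 (GPU): forward pass and accuracy, also on the device."""
    H = cp.tanh(cp.asarray(X) @ W + b)
    pred = cp.argmax(H @ beta, axis=1)
    truth = cp.argmax(cp.asarray(T), axis=1)
    return float(cp.mean((pred == truth).astype(cp.float32)))

X_tr, T_tr, X_te, T_te = load_and_preprocess()      # phase 1 on the CPU
W, b, beta = train_elm_gpu(X_tr, T_tr)              # phase 2 on the GPU
print("ELM test accuracy:", test_elm_gpu(X_te, T_te, W, b, beta))

Solving the regularized normal equations with cp.linalg.solve keeps the whole training step as dense matrix algebra, which is exactly the kind of workload GPUs accelerate well. For the D-ELM variant described above, a stack of ELM auto-encoders can be placed in front of the final ELM classifier. The sketch below, again a hedged illustration rather than the authors' code, reuses train_elm_gpu and test_elm_gpu from above; the helper elm_autoencoder_projection, the ae_sizes parameter, and the choice to project with tanh(X @ beta^T) follow one common ELM-AE formulation and are assumptions, not details taken from the paper.

def elm_autoencoder_projection(Xg, n_hidden, reg=1e-3):
    """One ELM auto-encoder layer on the GPU: the input itself is the
    target, and the transposed output weights become a projection."""
    W = cp.random.standard_normal((Xg.shape[1], n_hidden), dtype=cp.float32)
    b = cp.random.standard_normal((n_hidden,), dtype=cp.float32)
    H = cp.tanh(Xg @ W + b)
    A = H.T @ H + reg * cp.eye(n_hidden, dtype=cp.float32)
    beta = cp.linalg.solve(A, H.T @ Xg)   # reconstruct the input
    return beta.T                         # projection to the next layer

def train_delm_gpu(X, T, ae_sizes=(256, 128), n_hidden=512, reg=1e-3):
    """Phase 2 for D-ELM: stacked ELM-AEs, then a plain ELM on top."""
    Xg, projs = cp.asarray(X), []
    for n in ae_sizes:
        P = elm_autoencoder_projection(Xg, n, reg)
        projs.append(P)
        Xg = cp.tanh(Xg @ P)              # features for the next layer
    W, b, beta = train_elm_gpu(Xg, T, n_hidden, reg)
    return projs, W, b, beta

def test_delm_gpu(X, T, projs, W, b, beta):
    """Phase 3 for D-ELM: replay the stored projections, then classify."""
    Xg = cp.asarray(X)
    for P in projs:
        Xg = cp.tanh(Xg @ P)
    return test_elm_gpu(Xg, T, W, b, beta)

projs, W2, b2, beta2 = train_delm_gpu(X_tr, T_tr)
print("D-ELM test accuracy:", test_delm_gpu(X_te, T_te, projs, W2, b2, beta2))

Because every step in both sketches is a dense matrix product or solve on device arrays, the only serial work left is the phase-1 loading and pre-processing on the CPU, mirroring the division of labor described in the abstract.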