Advances in Sciences and Technology (Dec 2018)

Optimization of Machine Learning Process Using Parallel Computing

  • Michal Kazimierz Grzeszczyk

DOI
https://doi.org/10.12913/22998624/100341
Journal volume & issue
Vol. 12, no. 4
pp. 81 – 87

Abstract


The aim of this paper is to discuss the use of parallel computing in supervised machine learning processes to reduce computation time. This approach has gained popularity because sequential computing is often not sufficient for large-scale problems such as complex simulations or real-time tasks. After presenting the foundations of machine learning and neural network algorithms, as well as three types of parallel models, the author briefly characterizes the experiments carried out and the results obtained. Experiments on image recognition, run on five sets of empirical data, show a significant reduction in computation time compared to classical algorithms. Finally, possible directions for further research on parallel optimization of computation time in supervised perceptron learning are briefly outlined.
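The data-parallel training the abstract refers to can be illustrated with a minimal sketch: each worker computes the perceptron update over its own chunk of the data, and the partial updates are summed into a single weight step. This is a generic averaged-batch scheme for illustration only, not the paper's exact implementation; all function names here are the author's own. Threads are used rather than processes since NumPy releases the GIL during matrix operations.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def chunk_gradient(args):
    """Perceptron update contribution for one data chunk (labels in {-1, +1})."""
    w, X, y = args
    preds = np.where(X @ w >= 0, 1, -1)
    errors = preds != y
    # Sum of updates y_i * x_i over misclassified samples in this chunk.
    return (y[errors, None] * X[errors]).sum(axis=0)

def parallel_perceptron(X, y, workers=4, epochs=20, lr=0.1):
    """Batch perceptron training with chunk gradients computed in parallel."""
    w = np.zeros(X.shape[1])
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(epochs):
            chunks = zip([w] * workers,
                         np.array_split(X, workers),
                         np.array_split(y, workers))
            grads = pool.map(chunk_gradient, chunks)
            w = w + lr * np.sum(list(grads), axis=0)
    return w
```

The speedup comes from the chunk gradients being independent within an epoch; only the weight update at the end of each epoch is a synchronization point.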
