Сучасні інформаційні системи (Nov 2018)
PARALLEL IMPLEMENTATION OF THE METHOD OF GRADIENT BOOSTING
Abstract
Machine learning has been receiving increasing attention in all areas of information technology in recent years. On the one hand, this is due to the rapid growth of requirements for future specialists, and on the other, to the very rapid development of information technology and Internet communications. One of the main tasks in e-learning is the classification task. The machine learning method known as gradient boosting is well suited to this type of task. Gradient boosting is a family of powerful machine learning algorithms that have shown significant success in solving practical problems. These algorithms are very flexible and easily customized to the specific needs of an application; for example, they can be trained with respect to different loss functions. The idea of boosting is an iterative process of sequentially building partial models. Each new model learns from information about the errors made at the previous stage, and the resulting function is a linear combination of the whole ensemble of models, taking into account the minimization of some penalty function. The mathematical apparatus of gradient boosting is well adapted to solving the classification problem. However, as the amount of input data increases, the issue of reducing the time needed to build the ensemble of decision trees becomes relevant. Using parallel computing systems and parallel programming technologies can yield positive results, but it requires the development of new methods for constructing gradient boosting. The article presents the main stages of a method for the parallel construction of gradient boosting for solving the classification problem in e-learning. Unlike existing methods, it takes into account the features of the architecture and the organization of parallel processes in computing systems with shared and distributed memory. The method also makes it possible to evaluate the efficiency of building the ensemble of decision trees and of the parallel algorithms. Obtaining performance indicators at each iteration of the method helps to select a rational number of parallel processors in the computing system, which allows a further reduction of the completion time of gradient boosting. Simulation using the MPI parallel programming technology and the Python programming language for the DM-MIMD system architecture confirms the reliability of the results. An example of the organization of the input data is given, and a Python program for constructing gradient boosting is presented. The developed visualization of the obtained performance estimates allows the user to select the required configuration of the computing system.
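The abstract describes the core boosting loop: each new tree is fitted to the errors (pseudo-residuals) of the current ensemble, and the final predictor is a linear combination of all trees. The sketch below is a minimal, purely illustrative Python implementation of that idea for binary classification with log-loss; it is not the parallel MPI program presented in the article, and the synthetic data, tree depth, and learning rate are arbitrary assumptions.

```python
# Minimal sketch of the gradient boosting idea: each new tree learns the
# negative gradient (pseudo-residuals) of the log-loss, and the ensemble
# output is a linear combination of all trees. Illustrative only; not the
# article's parallel DM-MIMD implementation.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_gradient_boosting(X, y, n_trees=50, learning_rate=0.1, max_depth=3):
    """Binary classification with log-loss; y must contain 0/1 labels."""
    F = np.zeros(len(y))          # current raw ensemble scores
    trees = []
    for _ in range(n_trees):
        residuals = y - sigmoid(F)            # negative gradient of log-loss
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)                # next weak model learns the errors
        F += learning_rate * tree.predict(X)  # add it to the linear combination
        trees.append(tree)
    return trees

def predict_proba(trees, X, learning_rate=0.1):
    F = sum(learning_rate * t.predict(X) for t in trees)
    return sigmoid(F)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic 0/1 labels
    trees = fit_gradient_boosting(X, y)
    acc = np.mean((predict_proba(trees, X) > 0.5) == y)
    print("training accuracy:", acc)
```

In the article's parallel variant, the construction of the ensemble is distributed across processes (e.g., over MPI), whereas this sketch builds the trees strictly sequentially to keep the underlying iterative scheme visible.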
Keywords