Dianzi Jishu Yingyong (Mar 2023)

Distributed training method for deep neural networks

  • Yuan Ye,
  • Tian Yuan,
  • Jiang Qibing

DOI
https://doi.org/10.16157/j.issn.0258-7998.223244
Journal volume & issue
Vol. 49, no. 3
pp. 48 – 53

Abstract


Abstract: Deep neural networks have achieved great success in the classification and prediction of high-dimensional data. Training a deep neural network is a data-intensive task that requires collecting large-scale data from multiple sources. Because these data usually contain sensitive information, the training process of convolutional neural networks can easily leak data privacy. To address the problems of data privacy and communication cost during training, this paper proposes a distributed training method that allows a deep neural network to be learned jointly from multiple data sources. Firstly, a distributed training architecture composed of one computing center and multiple agents is proposed. Secondly, a distributed training algorithm based on multiple data sources is proposed: by splitting the convolutional neural network between the agents and the computing center, the model is trained jointly under the constraints that raw data are never shared directly and communication cost is reduced. Thirdly, the correctness of the algorithm is analyzed. Finally, experimental results show that the method is effective.
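The split-training idea summarized in the abstract (each agent keeps its raw data locally and exchanges only intermediate activations and gradients with a computing center) can be sketched as below. This is a minimal illustration under assumed details, not the paper's exact algorithm: the fully connected layers, mean-squared-error loss, single split point, and learning rate are all illustrative choices, and the random arrays stand in for each agent's private data.

```python
import numpy as np

rng = np.random.default_rng(0)

class AgentFront:
    """Agent-side front layer: raw data never leaves the agent."""
    def __init__(self, d_in, d_hid):
        self.W = rng.normal(0.0, 0.1, (d_in, d_hid))

    def forward(self, x):
        self.x = x
        self.h = np.maximum(x @ self.W, 0.0)  # ReLU activation
        return self.h  # only these activations are sent to the center

    def backward(self, grad_h, lr):
        grad_pre = grad_h * (self.h > 0.0)    # ReLU gradient
        self.W -= lr * self.x.T @ grad_pre

class CenterBack:
    """Computing-center back layer plus loss and its gradient."""
    def __init__(self, d_hid, d_out):
        self.V = rng.normal(0.0, 0.1, (d_hid, d_out))

    def step(self, h, y, lr):
        pred = h @ self.V
        err = pred - y
        loss = float(np.mean(err ** 2))       # MSE loss
        grad_h = err @ self.V.T / len(h)      # gradient returned to the agent
        self.V -= lr * h.T @ err / len(h)
        return loss, grad_h

# Two agents with private data sources, one shared computing center.
X1, y1 = rng.normal(size=(32, 8)), rng.normal(size=(32, 1))
X2, y2 = rng.normal(size=(32, 8)), rng.normal(size=(32, 1))
agents = [AgentFront(8, 16), AgentFront(8, 16)]
center = CenterBack(16, 1)

losses = []
for epoch in range(50):
    total = 0.0
    for (X, y), agent in zip([(X1, y1), (X2, y2)], agents):
        h = agent.forward(X)                  # upload: activations only
        loss, grad_h = center.step(h, y, lr=0.05)
        agent.backward(grad_h, lr=0.05)       # download: gradients only
        total += loss
    losses.append(total)
```

Note that the per-round traffic is one activation matrix and one gradient matrix of shape (batch, hidden) per agent, which is the communication the abstract says the split is designed to keep small; the raw inputs `X1`, `X2` are used only inside `AgentFront`.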

Keywords