IEEE Access (Jan 2020)

Risk Minimization Against Transmission Failures of Federated Learning in Mobile Edge Networks

  • Yuting Yan
  • Songhua Liu
  • Yibo Jin
  • Zhuzhong Qian
  • Sheng Zhang
  • Sanglu Lu

DOI
https://doi.org/10.1109/ACCESS.2020.2996307
Journal volume & issue
Vol. 8
pp. 98205–98217

Abstract

A variety of modern AI products require raw user data to train diverse machine learning models. With growing concern about data privacy, federated learning, a decentralized learning framework, enables privacy-preserving training by iteratively aggregating model updates from participants instead of their raw data. Since all participants, i.e., mobile devices, must transfer their local model updates concurrently and iteratively over mobile edge networks, the network is easily overloaded, leading to a high risk of transmission failures. Although previous work on transmission protocols strives to avoid transmission collisions, the number of iterative concurrent transmissions must be fundamentally reduced. Since raw data are often generated unevenly across devices, devices holding a small proportion of the data can be safely excluded, as they have little effect on model convergence. To further guarantee model accuracy, we propose selecting a subset of devices as participants so that a given proportion of the data remains involved, and we formulate the corresponding problem of minimizing the risk of transmission failures during model updates. We then design a randomized algorithm (ranRFL) that chooses suitable participants using a series of carefully calculated probabilities, and prove that its result concentrates on the optimum with high probability. Extensive simulations show that, through careful participant selection, ranRFL reduces the maximal error rate of model updates by up to 38.3% compared with state-of-the-art schemes.
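
The abstract does not detail how ranRFL computes its selection probabilities, so the Python sketch below only illustrates the surrounding idea: devices are sampled with probability proportional to their local data size until the chosen subset covers a required fraction of all data. Every name here (select_participants, data_sizes, coverage) is an assumption introduced for illustration, not the paper's interface, and this toy weighting does not model the transmission-failure risk that ranRFL's probabilities are designed to minimize.

import random

def select_participants(data_sizes, coverage=0.8, seed=None):
    """Sample devices with probability proportional to local data size
    until the selected subset holds at least `coverage` of all data.
    Illustrative sketch only: ranRFL's actual probabilities are derived
    to minimize transmission-failure risk, which this version ignores."""
    rng = random.Random(seed)
    total = sum(data_sizes.values())
    remaining = dict(data_sizes)
    selected, covered = [], 0
    # Keep drawing until the data-proportion constraint from the
    # abstract is satisfied or every device has been selected.
    while remaining and covered < coverage * total:
        devices = list(remaining)
        weights = [remaining[d] for d in devices]
        pick = rng.choices(devices, weights=weights, k=1)[0]
        selected.append(pick)
        covered += remaining.pop(pick)
    return selected

# Example (hypothetical data sizes): data-rich devices are likely chosen,
# while the smallest device tends to be excluded from the round.
sizes = {"dev_a": 900, "dev_b": 500, "dev_c": 120, "dev_d": 40}
print(select_participants(sizes, coverage=0.8, seed=1))

Fewer participating devices means fewer concurrent uploads per round, which is the abstract's lever for reducing transmission failures; the coverage constraint is what keeps the excluded data small enough not to hurt convergence.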

Keywords