IEEE Access (Jan 2021)
Adaptive Client Selection in Resource Constrained Federated Learning Systems: A Deep Reinforcement Learning Approach
Abstract
With data increasingly collected by end devices, and the number of such devices growing rapidly, most data sources today lie outside the cloud. To preserve data privacy by keeping data on client devices, federated learning (FL) has been proposed. In FL, end devices train a local model on their own data and send the model parameters, rather than the raw data, to a server that aggregates them into a new global model. However, due to the limited wireless bandwidth and energy of mobile devices, it is impractical for FL to perform model updating and aggregation on all participating devices in parallel, and it is difficult for the FL server to select appropriate clients for model training, a choice that is important for saving energy and reducing latency. In this paper, we establish a novel mobile edge computing (MEC) system for FL and propose an experience-driven control algorithm that adaptively chooses the client devices participating in each round of FL. The adaptive client selection mechanism in MEC can be modeled as a Markov Decision Process that requires no prior knowledge of the environment. We then propose a reinforcement-learning-based client selection scheme that learns to select a subset of devices in each communication round so as to minimize energy consumption and training delay while encouraging more client devices to participate in model updating. Experimental results show that energy consumption in FL can be reduced by up to 50% and training delay by up to 20.70% compared with static algorithms. Finally, we demonstrate the scalability of the MEC system across different tasks and the influence of different non-independent and identically distributed (non-IID) data settings.
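To make the round structure described above concrete, the following is a minimal sketch of one FL communication round in which the server selects a subset of clients and aggregates their parameter updates. It assumes a FedAvg-style weighted average and a simple score-based selection rule as a stand-in for the learned policy; the function names (`select_clients`, `fed_avg`) and the scoring scheme are illustrative assumptions, not details from this paper.

```python
# Hypothetical sketch of one FL round: client selection + aggregation.
# The score-based selection below stands in for the paper's RL policy.

def select_clients(client_ids, scores, k):
    """Pick the k clients with the highest scores.
    (A stand-in for the learned reinforcement-learning selection policy.)"""
    return sorted(client_ids, key=lambda c: scores[c], reverse=True)[:k]

def fed_avg(client_params, client_sizes):
    """FedAvg-style aggregation: average client parameter vectors,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# One round: the server selects 2 of 3 clients, then aggregates their updates.
ids = ["a", "b", "c"]
scores = {"a": 0.9, "b": 0.2, "c": 0.7}   # e.g. estimated energy/latency utility
params = {"a": [1.0, 2.0], "b": [9.0, 9.0], "c": [3.0, 4.0]}
sizes = {"a": 100, "b": 50, "c": 100}

chosen = select_clients(ids, scores, k=2)
global_model = fed_avg([params[c] for c in chosen],
                       [sizes[c] for c in chosen])
print(chosen, global_model)  # ['a', 'c'] [2.0, 3.0]
```

In a real deployment, `scores` would come from the trained policy's value estimates for the current system state, and the aggregated `global_model` would be broadcast back to all clients for the next round.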
Keywords