IEEE Access (Jan 2024)
Enhancing Global Model Performance in Federated Learning With Non-IID Data Using a Data-Free Generative Diffusion Model
Abstract
Federated Learning (FL) presents a decentralized approach to machine learning, allowing multiple clients to jointly train neural networks while keeping their local data private. However, FL faces challenges due to data heterogeneity, which leads to slow convergence and reduced performance. While sharing client information can mitigate data heterogeneity, it creates a trade-off between privacy preservation and model performance. This study addresses the challenge of data heterogeneity, particularly for clients with non-independent and identically distributed (Non-IID) data, by enhancing the global model. We propose a data-free knowledge distillation method (FedDiff) that fine-tunes the global model on the server. FedDiff leverages a diffusion model as a generator to explore the input space of the local models and transfer their knowledge to the global model. Additionally, we customize the diffusion model's data generation scheme to reduce training time. Extensive experiments demonstrate that FedDiff reduces the number of communication rounds between clients and the server by up to 57% for CIFAR-10 classification and up to 71% for CIFAR-100 on average, compared with other state-of-the-art FL methods, while maintaining the same level of accuracy. This makes it particularly suitable for low-power devices with constraints on data transmission and reception, such as satellites and medical care devices. Furthermore, it achieves higher average accuracy across all clients at the end of the training phase.
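The abstract does not specify the exact server-side procedure; as a rough illustration only, the sketch below shows one plausible form of data-free knowledge distillation on the server, assuming a PyTorch setting in which `generator.sample`, `local_models`, and `global_model` are hypothetical stand-ins (the paper's actual diffusion sampling scheme and loss may differ).

```python
import torch
import torch.nn.functional as F

def server_finetune_global(global_model, local_models, generator,
                           steps=100, batch_size=64, lr=1e-3,
                           temperature=3.0, device="cpu"):
    """Sketch of server-side data-free distillation: synthetic samples from a
    generator (e.g., a diffusion model's reverse process) carry the ensemble
    knowledge of the local models into the global (student) model."""
    global_model.to(device).train()
    for m in local_models:
        m.to(device).eval()
    optimizer = torch.optim.Adam(global_model.parameters(), lr=lr)

    for _ in range(steps):
        # Hypothetical generator API: draws a batch of synthetic inputs
        # that probe the input space of the local models.
        x_syn = generator.sample(batch_size).to(device)

        with torch.no_grad():
            # The ensemble of local models acts as the teacher.
            teacher_logits = torch.stack([m(x_syn) for m in local_models]).mean(dim=0)

        student_logits = global_model(x_syn)
        # Standard temperature-scaled KL distillation loss (an assumption;
        # the paper may use a different objective).
        loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=1),
            F.softmax(teacher_logits / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return global_model
```

In this reading, the diffusion generator replaces real client data, so no raw samples leave the clients; only model parameters and synthetic inputs are involved on the server.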
Keywords