Journal of King Saud University: Computer and Information Sciences (Jan 2024)

Towards addressing aggregation deviation for model training in resource-scarce edge environment

  • Qiaoyun Yin,
  • Zhiyong Feng,
  • Shizhan Chen,
  • Hongyue Wu,
  • Gaoyong Han

Journal volume & issue
Vol. 36, no. 1
p. 101912

Abstract


Federated Learning (FL) can train models in an edge environment without sending raw data. However, its performance is still constrained by data heterogeneity. To address the problems of data heterogeneity and resource scarcity on edge devices, we propose Federated Learning via Dynamic Aggregation (FedDA), which mitigates the influence of data heterogeneity and improves model accuracy. FedDA updates the impact of each local model on the global model in real time across different training stages. It also adjusts the local epoch count in each round to prevent devices from dropping out while still obtaining more accurate local models. The core module is the model impact factor (MIF), which determines the aggregation weights and thereby avoids the loss of local information caused by fixed weights during aggregation. We conducted several experiments to evaluate the convergence speed of different algorithms on MNIST. FedDA consistently outperforms six other SOTA algorithms on the MNIST, Cifar10, and Cifar100 datasets. Under significant data heterogeneity, FedDA improves accuracy over the competing algorithms by up to 6%, and by at least about 3%, especially in resource-scarce environments. To reach a specified accuracy, FedDA is 3 times faster than SCAFFOLD and at least 50% faster than the other algorithms.
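The dynamic aggregation the abstract describes can be illustrated with a minimal sketch: local model parameters are combined with per-client weights that may change each round, instead of the fixed data-size weights used by standard FedAvg. The weight values and the `aggregate` helper below are hypothetical placeholders; the paper's actual MIF computation is not specified here.

```python
import numpy as np

def aggregate(local_models, impact_factors):
    """Weighted average of local model parameters.

    impact_factors stand in for per-client aggregation weights
    (the paper's MIF); they are normalized to sum to 1 so the
    result stays on the same scale as the local models.
    """
    w = np.asarray(impact_factors, dtype=float)
    w = w / w.sum()
    return sum(wi * np.asarray(m, dtype=float)
               for wi, m in zip(w, local_models))

# Two clients with identical model shape; illustrative weights favor client 0.
global_model = aggregate([[1.0, 2.0], [3.0, 4.0]], impact_factors=[3, 1])
# With normalized weights (0.75, 0.25) this yields [1.5, 2.5].
```

In a full FL loop, the impact factors would be recomputed every round from the current training state, which is what distinguishes dynamic aggregation from a fixed weighting scheme.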
