Mathematics (Nov 2022)

An Energy Efficient Specializing DAG Federated Learning Based on Event-Triggered Communication

  • Xiaofeng Xue,
  • Haokun Mao,
  • Qiong Li,
  • Furong Huang,
  • Ahmed A. Abd El-Latif

DOI
https://doi.org/10.3390/math10224388
Journal volume & issue
Vol. 10, no. 22
p. 4388

Abstract

Specializing Directed Acyclic Graph Federated Learning (SDAGFL) is a new federated learning framework with the advantages of decentralization, personalization, and resistance to single points of failure and poisoning attacks. Instead of training a single global model, the clients in SDAGFL asynchronously update their models based on devices with similar data distributions through Directed Acyclic Graph Distributed Ledger Technology (DAG-DLT), which is designed for IoT scenarios. Because of the many features inherited from DAG-DLT, SDAGFL is well suited to IoT scenarios in many respects. However, its training process is quite energy-consuming: each client must compute the confidence and rating of the nodes selected by multiple random walks, traversing the ledger to a depth of 15–25, in order to obtain the “reference model” used to judge whether or not to broadcast the newly trained model. Energy consumption is an important issue in IoT scenarios, as most devices are battery-powered and operate under strict energy constraints. To optimize SDAGFL for IoT, this paper proposes an energy-efficient SDAGFL based on an event-triggered communication mechanism, i.e., ESDAGFL. In ESDAGFL, a new model is broadcast only when it differs significantly from the previous one, instead of traversing the ledger to search for the “reference model”. We evaluate ESDAGFL on the FMNIST-clustered and Poets datasets. The simulation is performed on a platform with an Intel® Core™ i7-10700 CPU (CA, USA). The simulation results demonstrate that ESDAGFL achieves a balance between training accuracy and specialization as good as that of SDAGFL, while reducing energy consumption by 42.5% and 51.7% on the FMNIST-clustered and Poets datasets, respectively.
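
The abstract does not spell out the exact trigger condition, so the following is only a minimal sketch of how an event-triggered broadcast rule of this kind might look on a client: the newly trained parameters are compared against the last model the client broadcast, and publication to the DAG ledger happens only when the change exceeds a threshold. The function names (`train_one_round`, `publish_to_dag`), the L2-norm criterion, and the threshold value are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def should_broadcast(new_params, last_broadcast_params, threshold):
    """Event trigger (assumed form): fire only when the newly trained model
    deviates enough from the last model this client broadcast."""
    # Flatten all parameter tensors and measure the overall update magnitude.
    diff = np.concatenate([(n - o).ravel()
                           for n, o in zip(new_params, last_broadcast_params)])
    return np.linalg.norm(diff) > threshold

def local_round(client, threshold=0.1):
    """One local training round under an event-triggered communication scheme."""
    new_params = client.train_one_round()        # local training on the client's own data
    if should_broadcast(new_params, client.last_broadcast, threshold):
        client.publish_to_dag(new_params)        # append the model as a new transaction on the DAG ledger
        client.last_broadcast = new_params
    # Otherwise, skip broadcasting (and the associated ledger traversal) this round,
    # which is where the energy saving would come from.
```

Under such a scheme, communication and ledger-traversal costs scale with how often the local model changes meaningfully rather than with the number of training rounds, which is consistent with the energy savings reported in the abstract.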

Keywords