IEEE Access (Jan 2023)

P4FL: An Architecture for Federating Learning With In-Network Processing

  • Alessio Sacco,
  • Antonino Angi,
  • Guido Marchetto,
  • Flavio Esposito

DOI
https://doi.org/10.1109/ACCESS.2023.3318109
Journal volume & issue
Vol. 11
pp. 103650 – 103658

Abstract


The unceasing development of Artificial Intelligence (AI) and Machine Learning (ML) techniques is accompanied by growing privacy concerns about the training data. A relatively recent approach to partially cope with such concerns is Federated Learning (FL), a technique in which only the parameters of the trained neural network models are transferred rather than the data itself. Despite the benefits that FL may provide, such an approach can lead to synchronization issues (especially when applied in the context of numerous IoT devices), the network and the server may turn into bottlenecks, and the load may become unsustainable for some nodes. To solve these issues and reduce the traffic on the network, in this paper we propose P4FL, a novel FL architecture that uses the paradigm of network programmability to program P4 switches to compute intermediate aggregations. In particular, we defined a custom in-band protocol based on MPLS to carry the model parameters and adapted the P4 switch behavior to aggregate model gradients. We then evaluated P4FL in Mininet and verified that using network nodes for in-network model caching and gradient aggregation has two advantages: first, it alleviates the bottleneck effect of the central FL server; second, it further accelerates the entire training process.
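The idea of in-network intermediate aggregation described in the abstract can be illustrated with a minimal sketch. The following Python code is purely illustrative and does not come from the paper (P4FL's actual data plane is written in P4 and uses an MPLS-based in-band protocol); all class and function names here are hypothetical. It models a switch that sums the gradient vectors of its attached clients and forwards a single partial aggregate upstream, so the central FL server receives one message per switch rather than one per client:

```python
# Hypothetical sketch of in-network gradient aggregation (illustrative
# names only; the real P4FL data plane is implemented in P4).

class AggregatingSwitch:
    """A switch that combines gradients from its local clients."""

    def __init__(self, n_clients):
        self.n_clients = n_clients   # clients expected this round
        self.partial_sum = None      # running element-wise sum of gradients
        self.received = 0

    def on_gradient(self, grad):
        """Accumulate one client's gradient vector; return the partial
        aggregate once all expected clients have reported, else None."""
        if self.partial_sum is None:
            self.partial_sum = list(grad)
        else:
            self.partial_sum = [a + b for a, b in zip(self.partial_sum, grad)]
        self.received += 1
        if self.received == self.n_clients:
            return self.partial_sum  # forward a single message upstream
        return None                  # still waiting for more clients


def server_average(partials, total_clients):
    """The FL server only averages the pre-summed partial aggregates."""
    summed = [sum(col) for col in zip(*partials)]
    return [s / total_clients for s in summed]
```

In this sketch, the server's per-round message count drops from the number of clients to the number of edge switches, which is the bottleneck-relief effect the abstract claims.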
