Complex & Intelligent Systems (Jul 2023)

Fedlabx: a practical and privacy-preserving framework for federated learning

  • Yuping Yan,
  • Mohammed B. M. Kamel,
  • Marcell Zoltay,
  • Marcell Gál,
  • Roland Hollós,
  • Yaochu Jin,
  • Ligeti Péter,
  • Ákos Tényi

DOI
https://doi.org/10.1007/s40747-023-01184-3
Journal volume & issue
Vol. 10, no. 1
pp. 677–690

Abstract

Federated learning (FL) draws attention in academia and industry due to its privacy-preserving capability in training machine learning models. However, there are still critical security attacks and vulnerabilities, including gradient leakage and inference attacks. Communication is another bottleneck in basic FL schemes, since large-scale transmission of model parameters leads to inefficient communication, latency, and slower learning. To overcome these shortcomings, different communication-efficiency strategies and privacy-preserving cryptographic techniques have been proposed; however, a single method can only partially resist privacy attacks. This paper presents a practical, privacy-preserving scheme that combines cryptographic techniques with communication and networking solutions. We implement Kafka for message distribution, the Diffie–Hellman scheme for secure server aggregation, and gradient differential privacy to prevent inference attacks. The proposed approach maintains training efficiency while addressing gradient leakage and inference attacks. In addition, the implementation of Kafka and ZooKeeper provides asynchronous communication and anonymous authenticated computation with role-based access control. Finally, we prove the privacy-preserving properties of the proposed solution via a security analysis and empirically demonstrate its efficiency and practicality.
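The following is a minimal, self-contained sketch, not the authors' implementation, of how the ingredients named in the abstract can fit together: Diffie–Hellman key agreement between a pair of clients, a pairwise mask derived from the shared secret so that individual updates stay hidden from the aggregation server, and Gaussian noise on clipped gradients as the differential-privacy step. The toy prime, clip norm, noise scale, and all function and client names are hypothetical choices made only for illustration.

```python
import hashlib
import math
import random
import secrets

# Toy 127-bit Mersenne prime and generator for illustration only;
# a real deployment would use a standardized 2048-bit MODP group.
P = 2**127 - 1
G = 3


def dh_keypair(p=P, g=G):
    """Generate a Diffie-Hellman private/public key pair."""
    priv = secrets.randbelow(p - 2) + 1
    pub = pow(g, priv, p)
    return priv, pub


def dh_shared_secret(my_priv, their_pub, p=P):
    """Derive the shared secret g^(ab) mod p."""
    return pow(their_pub, my_priv, p)


def pairwise_mask(shared_secret, dim):
    """Expand the shared secret into a reproducible mask vector."""
    seed = int.from_bytes(hashlib.sha256(str(shared_secret).encode()).digest(), "big")
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]


def dp_sanitize(grad, clip_norm=1.0, noise_std=0.1):
    """Clip the gradient to clip_norm and add Gaussian noise (DP step)."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return [g * scale + random.gauss(0.0, noise_std) for g in grad]


# --- toy round with two clients and a 4-dimensional "gradient" -------------
dim = 4
grads = {"client_a": [0.5, -0.2, 0.1, 0.9], "client_b": [-0.3, 0.4, 0.7, -0.1]}

priv_a, pub_a = dh_keypair()
priv_b, pub_b = dh_keypair()
secret = dh_shared_secret(priv_a, pub_b)  # equals dh_shared_secret(priv_b, pub_a)
mask = pairwise_mask(secret, dim)

# One client adds the pairwise mask, the other subtracts it, so the masks
# cancel when the server sums the uploads.
upload_a = [g + m for g, m in zip(dp_sanitize(grads["client_a"]), mask)]
upload_b = [g - m for g, m in zip(dp_sanitize(grads["client_b"]), mask)]

aggregate = [a + b for a, b in zip(upload_a, upload_b)]
print("masked aggregate:", aggregate)
```

Because the pairwise masks cancel in the server-side sum, the aggregator only learns the noised aggregate rather than any individual gradient. How the masked updates travel is a separate concern; the abstract attributes message distribution to Kafka. The sketch below assumes the kafka-python client, a locally running broker, and hypothetical topic and group names, and shows one plausible way a client could publish its update and the aggregator could consume updates asynchronously.

```python
import json
from kafka import KafkaProducer, KafkaConsumer

update = [0.2, -0.1, 0.4, 0.8]  # a masked, DP-noised gradient as in the sketch above

# Client side: publish the update to a per-round topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("fl-updates-round-1", {"client_id": "client_a", "update": update})
producer.flush()

# Server side: consume updates asynchronously from the same topic.
consumer = KafkaConsumer(
    "fl-updates-round-1",
    bootstrap_servers="localhost:9092",
    group_id="fl-aggregator",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    consumer_timeout_ms=10_000,  # stop iterating after 10 s of silence
)
updates = [message.value["update"] for message in consumer]
```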

Keywords