Complex & Intelligent Systems (Jul 2023)
Fedlabx: a practical and privacy-preserving framework for federated learning
Abstract
Federated learning (FL) has drawn attention in academia and industry due to its privacy-preserving capability in training machine learning models. However, critical security attacks and vulnerabilities remain, including gradient leakage and inference attacks. Meanwhile, communication is another bottleneck in basic FL schemes, since large-scale transmission of FL parameters leads to inefficient communication, latency, and slower learning. To overcome these shortcomings, various communication-efficiency strategies and privacy-preserving cryptographic techniques have been proposed; however, any single method can resist privacy attacks only partially. This paper presents a practical, privacy-preserving scheme that combines cryptographic techniques with communication networking solutions. We implement Kafka for message distribution, the Diffie–Hellman scheme for secure server aggregation, and gradient differential privacy for inference attack prevention. The proposed approach maintains training efficiency while addressing gradient leakage and inference attacks. Meanwhile, the implementation of Kafka and ZooKeeper provides asynchronous communication and anonymous authenticated computation with role-based access controls. Finally, we prove the privacy-preserving properties of the proposed solution via security analysis and empirically demonstrate its efficiency and practicality.
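The gradient differential privacy mentioned above is typically realized by clipping each client's gradient and adding calibrated Gaussian noise before transmission, in the style of DP-SGD. The sketch below is an illustrative assumption, not the paper's actual implementation; the function name, clipping bound, and noise multiplier are hypothetical parameters chosen for demonstration.

```python
import numpy as np

def privatize_gradients(grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Illustrative DP-SGD-style sanitization of one client's gradient.

    1. Clip the gradient to an L2 norm of at most `clip_norm`.
    2. Add Gaussian noise scaled by `noise_multiplier * clip_norm`.
    The noisy result is what the client would send to the aggregator.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    g = np.asarray(grads, dtype=float)
    norm = np.linalg.norm(g)
    # Scale down only if the gradient exceeds the clipping bound.
    g = g * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=g.shape)
    return g + noise
```

With `noise_multiplier=0` the function reduces to pure clipping, which makes the bounded-sensitivity property easy to verify before noise is layered on.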
Keywords