Jisuanji kexue (Sep 2022)

Federated Learning Scheme Based on Secure Multi-party Computation and Differential Privacy

  • TANG Ling-tao, WANG Di, ZHANG Lu-fei, LIU Sheng-yun

DOI
https://doi.org/10.11896/jsjkx.210800108
Journal volume & issue
Vol. 49, no. 9
pp. 297 – 305

Abstract


Federated learning provides a novel solution to collaborative learning among untrusted entities. Through a local-training-and-central-aggregation pattern, the federated learning algorithm trains a global model while protecting the local data privacy of each entity. However, recent studies show that local models uploaded by clients and global models produced by the server may still leak users' private information. Secure multi-party computation and differential privacy are two mainstream privacy-preserving techniques, used to protect the privacy of the computation process and the computation outputs, respectively. Few works exploit the benefits of both techniques at the same time. This paper proposes a privacy-preserving federated learning scheme for deep learning that combines secure multi-party computation and differential privacy. Clients add noise to their local models and secret-share them among multiple servers. The servers aggregate these model shares via secure multi-party computation to obtain a private global model. The proposed scheme not only protects the privacy of the local model updates uploaded by clients, but also prevents adversaries from inferring sensitive information from globally shared data such as aggregated models. The scheme also tolerates dropout of unstable clients and is compatible with complex aggregation functions. In addition, it extends naturally to the decentralized setting for real-world applications where no trusted center exists. We implement our system in Python and PyTorch. Experiments validate that the proposed scheme achieves the same level of efficiency and accuracy as plaintext federated learning.
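The following is a minimal sketch (not the authors' code) of the pipeline the abstract describes: each client perturbs its local model update with noise for differential privacy, splits the noisy update into additive secret shares, and each server sums the shares it receives, so that recombining the per-server sums reveals only the noisy aggregate. The Gaussian noise scale, the number of servers, and the real-valued share encoding (real MPC schemes typically work over a finite field with fixed-point encoding) are illustrative assumptions, not details taken from the paper.

```python
import torch

NUM_SERVERS = 3   # assumed number of aggregation servers
NOISE_STD = 0.01  # assumed DP noise scale (sigma); illustrative only

def client_prepare(update: torch.Tensor, num_servers: int = NUM_SERVERS):
    """Add noise to a local model update, then split it into additive shares."""
    noisy = update + torch.randn_like(update) * NOISE_STD  # DP perturbation
    shares = [torch.randn_like(noisy) for _ in range(num_servers - 1)]
    shares.append(noisy - sum(shares))  # shares sum exactly to the noisy update
    return shares                       # share i is sent to server i

def server_aggregate(shares_from_clients):
    """Each server sums the shares it received from all clients."""
    return sum(shares_from_clients)

# Toy run: 4 clients, a 5-parameter "model" update each.
client_updates = [torch.randn(5) for _ in range(4)]
per_server = [[] for _ in range(NUM_SERVERS)]
for update in client_updates:
    for server_id, share in enumerate(client_prepare(update)):
        per_server[server_id].append(share)

partial_sums = [server_aggregate(shares) for shares in per_server]  # one per server
global_update = sum(partial_sums) / len(client_updates)             # noisy federated average
print(global_update)
```

In this sketch no single server ever sees a client's (even noised) update, only a random-looking share of it; the noisy average emerges only when the servers' partial sums are combined, which mirrors the protect-the-process (MPC) plus protect-the-output (DP) division of labor described in the abstract.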

Keywords