IEEE Access (Jan 2024)
VPPFL: Verifiable Privacy-Preserving Federated Learning in Cloud Environment
Abstract
As a distributed machine learning paradigm, federated learning has attracted wide attention from academia and industry by enabling multiple users to jointly train models without sharing their local data. However, federated learning still faces various security and privacy issues. First, even if users upload only gradients, their private information may still be leaked. Second, if the aggregation server intentionally returns fabricated results, the model's performance may degrade. To address these issues, we propose VPPFL, a verifiable privacy-preserving federated learning scheme that defends against a semi-malicious cloud server. We use threshold multi-key homomorphic encryption to protect local gradients and construct a one-way function that enables users to independently verify the aggregation results. Furthermore, our scheme tolerates the dropout of a small portion of users during training. Finally, we conduct simulation experiments on the MNIST dataset, demonstrating that VPPFL correctly and effectively completes training while achieving privacy protection.
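To make the aggregation idea concrete, the following is a minimal sketch of how additively homomorphic encryption lets a server sum encrypted gradients without seeing them. It uses a toy single-key Paillier cryptosystem with small, insecure parameters and a hypothetical fixed-point quantization step; it does not reproduce VPPFL's threshold multi-key scheme or its one-way-function verification, and all names and parameters here are illustrative assumptions rather than the paper's construction.

```python
# Toy sketch: additively homomorphic aggregation of gradients (NOT the VPPFL scheme).
# Single-key Paillier with tiny demo primes; for illustration only, not secure.
import math
import random

# --- Toy Paillier key material (insecure demo primes) ---
p, q = 10007, 10009               # small known primes, illustration only
n = p * q
n_sq = n * n
g = n + 1                          # standard choice g = n + 1
lam = math.lcm(p - 1, q - 1)       # Carmichael function lambda(n)
mu = pow(lam, -1, n)               # mu = lambda^{-1} mod n (valid for g = n + 1)

def encrypt(m: int) -> int:
    """Encrypt an integer plaintext m (0 <= m < n)."""
    r = random.randrange(1, n)                     # random blinding factor
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """Decrypt a ciphertext back to its integer plaintext."""
    L = (pow(c, lam, n_sq) - 1) // n
    return (L * mu) % n

# --- Simulated round: each user encrypts a quantized scalar gradient ---
SCALE = 10_000                                     # assumed fixed-point scale
user_gradients = [0.021, -0.013, 0.007]            # one gradient per user
ciphertexts = [encrypt(int(gval * SCALE) % n) for gval in user_gradients]

# The server multiplies ciphertexts, which adds plaintexts homomorphically,
# so it never sees any individual gradient in the clear.
aggregate_ct = 1
for c in ciphertexts:
    aggregate_ct = (aggregate_ct * c) % n_sq

# Decryption recovers the sum; map back from mod-n to a signed value.
total = decrypt(aggregate_ct)
if total > n // 2:
    total -= n
print("aggregated gradient:", total / SCALE)       # prints 0.015
```

In VPPFL the decryption capability would instead be shared among users via threshold multi-key homomorphic encryption, so no single party (including the cloud server) can decrypt individual contributions.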
Keywords