网络与信息安全学报 (Chinese Journal of Network and Information Security), June 2024
Verifiable federated aggregation method based on homomorphic proxy re-authentication
Abstract
Federated learning trains a shared model by exchanging gradient parameters rather than raw data, but its aggregation phase is exposed to two threats: a malicious server may aggregate the data dishonestly, and untrusted participants may poison the global model, undermining the reliability and security of the training process. To address these issues, homomorphic proxy re-authentication was introduced for the first time to construct a bi-directional authentication method suited to multi-party aggregation computation, and, combined with the double-mask technique, a privacy-preserving, efficient, and trustworthy federated aggregation method was built. The method not only enables users to verify the correctness of the aggregated global model, but also allows the aggregation server to check the trustworthiness of each model's source and the integrity of the uploaded model, preventing attackers from manipulating users to disrupt secure aggregation while leaking no private user data during verification. Formal security analysis demonstrates that the verifiable federated aggregation method resists forgery attacks and Sybil attacks and exhibits good robustness. Simulation experiments further show that the proposed method achieves credible verification of the aggregation results without affecting federated training, and that verification proceeds correctly even when users drop out midway.
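As context for the double-mask technique mentioned above, the sketch below illustrates the standard pairwise-masking idea from the secure-aggregation literature in Python: each user adds pairwise masks that cancel when the server sums the masked updates, plus a self mask removed once its seed is recovered. It is not the paper's construction; the SHA-256-based PRG, the 2^32 modulus, and helper names such as `prg` and `mask_update` are illustrative assumptions.

```python
# Minimal sketch of the double-mask idea (pairwise masks that cancel in the
# sum, plus a per-user self mask). This is NOT the paper's protocol; the
# hash-based PRG, the 2^32 modulus, and all names here are assumptions.
import hashlib
import numpy as np

MOD = 2 ** 32   # all arithmetic is done modulo 2^32
DIM = 4         # toy gradient dimension


def prg(seed: bytes, dim: int) -> np.ndarray:
    """Expand a seed into a pseudorandom mask vector (toy hash-based PRG)."""
    words = []
    counter = 0
    while len(words) < dim:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        words.extend(int.from_bytes(digest[k:k + 4], "big") for k in range(0, 32, 4))
        counter += 1
    return np.array(words[:dim], dtype=np.uint64) % MOD


def mask_update(uid, users, pair_seeds, self_seed, grad):
    """y_i = x_i + PRG(b_i) + sum_{j>i} PRG(s_ij) - sum_{j<i} PRG(s_ij)  (mod MOD)."""
    y = (grad.astype(np.uint64) + prg(self_seed, DIM)) % MOD
    for j in users:
        if j == uid:
            continue
        m = prg(pair_seeds[frozenset((uid, j))], DIM)
        y = (y + m) % MOD if uid < j else (y - m) % MOD
    return y


# Toy run with three users: the server only ever sees masked updates.
users = [1, 2, 3]
pair_seeds = {frozenset(p): f"s{min(p)}{max(p)}".encode() for p in [(1, 2), (1, 3), (2, 3)]}
self_seeds = {u: f"b{u}".encode() for u in users}
grads = {u: (np.arange(DIM) + u).astype(np.uint64) for u in users}

masked = {u: mask_update(u, users, pair_seeds, self_seeds[u], grads[u]) for u in users}

# Pairwise masks cancel in the sum; self masks are removed after the users
# reveal (or secret-share) their self-mask seeds for the surviving set.
agg = sum(masked.values()) % MOD
agg = (agg - sum(prg(self_seeds[u], DIM) for u in users)) % MOD

assert np.array_equal(agg, sum(grads.values()) % MOD)  # equals the plaintext sum
```

In this style of masking, the seed-recovery step is what keeps the protocol robust to dropouts: the remaining users can reconstruct only the masks needed to unmask the sum, which is consistent with the dropout tolerance claimed in the abstract.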
Keywords