Mathematics (Sep 2024)
A Secure and Fair Federated Learning Framework Based on Consensus Incentive Mechanism
Abstract
Federated learning enables collaborative computation among multiple participants while safeguarding user privacy. However, current federated learning algorithms assume that all participants are trustworthy and that their systems are secure. Real-world scenarios nevertheless present several challenges: (1) malicious clients disrupt federated learning through model-poisoning and data-poisoning attacks, and although secure aggregation methods have been proposed to address this issue, many of them have inherent limitations; (2) clients may refuse to participate, or participate only passively, out of self-interest, and may even interfere with training because of competitive relationships. To overcome these obstacles, we devise a reliable federated framework that ensures secure computing throughout the entire federated task process. First, we propose a method for detecting malicious models to safeguard the integrity of model aggregation. Second, we propose a fair contribution-assessment method and grant the right to write blocks to the creator of the optimal model, encouraging active participation in both local training and model aggregation. Finally, we establish a computational framework based on blockchain and smart contracts to uphold the integrity and fairness of federated tasks. To assess the efficacy of our framework, we simulate various types of client attacks and contribution-assessment scenarios on multiple open-source datasets. The results demonstrate that our framework effectively ensures the credibility of federated tasks while evaluating client contributions impartially.
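To make the pipeline described above concrete, the following is a minimal sketch of one aggregation round. It is not the paper's actual algorithm: the detection rule (cosine similarity of each client update to the coordinate-wise median), the contribution score (alignment with the aggregate), and all names (`filter_malicious`, `score_contributions`, the client IDs) are illustrative assumptions.

```python
import statistics


def cosine_similarity(u, v):
    """Cosine similarity between two equal-length update vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0


def filter_malicious(updates, threshold=0.0):
    """Hypothetical detection rule: keep only updates whose cosine
    similarity to the coordinate-wise median update exceeds a threshold."""
    median = [statistics.median(col) for col in zip(*updates.values())]
    return {cid: u for cid, u in updates.items()
            if cosine_similarity(u, median) > threshold}


def score_contributions(updates, aggregate):
    """Hypothetical contribution score: alignment with the aggregate."""
    return {cid: cosine_similarity(u, aggregate)
            for cid, u in updates.items()}


# Toy round with three clients; "c3" submits a sign-flipped (poisoned) update.
updates = {
    "c1": [1.0, 0.9, 1.1],
    "c2": [0.9, 1.0, 1.0],
    "c3": [-1.0, -1.0, -1.0],  # poisoned update
}
honest = filter_malicious(updates)                    # "c3" is rejected
aggregate = [sum(col) / len(col) for col in zip(*honest.values())]
scores = score_contributions(honest, aggregate)
leader = max(scores, key=scores.get)                  # wins block-writing rights
```

In a blockchain-backed deployment, `leader` would be the client entitled to append the round's block; the detection and scoring rules shown here are stand-ins for the paper's actual methods.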
Keywords