SoftwareX (Sep 2024)
Federated learning secure model: A framework for malicious clients detection
Abstract
The Federated Learning Secure Model Repository is a framework for ensuring the trustworthiness of machine learning models produced through federated learning. It combines algorithms for real-time detection of malicious client contributions with complementary model-auditing methods. Through continuous monitoring, the repository preserves the integrity and security of shared models, mitigating the risks posed by malicious actors. This approach strengthens the collaborative nature of federated learning while protecting against model poisoning and maintaining the reliability of the shared model ecosystem.
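To make the detection idea concrete, the following is a minimal illustrative sketch (not the paper's actual algorithm) of one common way a server can screen client updates for poisoning: compare each update against a robust coordinate-wise median reference, flagging updates that point away from the consensus direction or have an abnormally large magnitude. The function name, thresholds, and the dict-based interface are assumptions introduced here for illustration.

```python
import numpy as np

def detect_malicious_updates(updates, cos_threshold=0.0, norm_factor=3.0):
    """Flag client updates that deviate from the coordinate-wise median.

    updates: dict mapping client id -> 1-D parameter-update vector.
    Returns the set of client ids flagged as suspicious.
    (Illustrative heuristic; thresholds are assumptions, not from the paper.)
    """
    ids = list(updates)
    matrix = np.stack([updates[i] for i in ids])
    reference = np.median(matrix, axis=0)              # robust reference update
    ref_norm = np.linalg.norm(reference)
    median_norm = np.median(np.linalg.norm(matrix, axis=1))

    flagged = set()
    for cid, vec in zip(ids, matrix):
        norm = np.linalg.norm(vec)
        cosine = float(vec @ reference) / (norm * ref_norm + 1e-12)
        # Flag updates pointing away from the consensus direction,
        # or with a magnitude far above the typical client update.
        if cosine < cos_threshold or norm > norm_factor * median_norm:
            flagged.add(cid)
    return flagged
```

For example, if four honest clients submit near-identical updates and one client submits a large negated update (a simple model-poisoning attempt), only the latter is flagged. Real systems would combine such screening with the auditing and continuous-monitoring steps the abstract describes.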