IEEE Open Journal of the Communications Society (Jan 2025)
Private Data Leakage in Federated Contrastive Learning Networks
Abstract
In next-generation wireless networks, distributed clients collaborate on data perception, knowledge discovery, and model reasoning. Federated Contrastive Learning (FCL) is an emerging approach for learning from decentralized unlabeled data while preserving data privacy. In FCL, participating clients jointly learn a global encoder from unlabeled data, which then serves as a versatile feature extractor for diverse downstream tasks. However, owing to its distributed nature, FCL is susceptible to local data leakage risks, such as membership information leakage, an aspect often overlooked by current solutions. This study investigates the feasibility of membership inference attacks against FCL and proposes a robust membership inference methodology. Our objective is to determine whether a given sample belongs to the training data by accessing the model's inference output. Specifically, we consider attackers operating as a client, without the ability to manipulate server-side aggregation or observe the training status of other clients. We introduce two membership inference attacks tailored to FCL, a passive attack and an active attack, depending on whether the attacker intervenes in local model training. Experimental results across diverse datasets validate the effectiveness of our method and underscore the local data risks inherent in the FCL paradigm.
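The passive attack described above can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not the paper's actual recipe: the encoder is a stand-in (a fixed random linear map in place of the trained FCL global encoder the attacker would query), the augmentation is toy additive noise, and the decision threshold is arbitrary. The sketch only conveys the general intuition that contrastive training pulls embeddings of augmented views of training members closer together, so a view-similarity score can serve as a membership signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the trained FCL global encoder: a fixed
# linear map followed by L2 normalization. In the attack setting this
# would be the black-box encoder whose inference output the client queries.
W = rng.normal(size=(8, 16))

def encode(x):
    """Map an input vector to an L2-normalized embedding."""
    z = x @ W
    return z / np.linalg.norm(z)

def augment(x):
    """Toy augmentation: small additive Gaussian noise."""
    return x + 0.05 * rng.normal(size=x.shape)

def membership_score(x):
    """Cosine similarity between embeddings of two augmented views.

    Assumed intuition: contrastive training aligns views of *member*
    samples, so members tend to receive higher scores than non-members.
    """
    return float(encode(augment(x)) @ encode(augment(x)))

def infer_membership(x, threshold=0.9):
    """Passive decision rule: predict 'member' if the score exceeds a
    threshold (the threshold here is arbitrary, for illustration only)."""
    return membership_score(x) >= threshold

sample = rng.normal(size=8)
print("score:", membership_score(sample), "member?", infer_membership(sample))
```

A real attack would calibrate the threshold on shadow data and use the paper's actual score, but the shape of the decision (score the inference output, then threshold) is the same.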
Keywords