Applied Sciences (Feb 2023)

MDIFL: Robust Federated Learning Based on Malicious Detection and Incentives

  • Ruolan Wu,
  • Yuling Chen,
  • Chaoyue Tan,
  • Yun Luo

DOI
https://doi.org/10.3390/app13052793
Journal volume & issue
Vol. 13, no. 5
p. 2793

Abstract


Federated Learning (FL) is an emerging distributed framework that enables clients to conduct distributed learning and share models globally without requiring data to leave the local device. In the FL process, participants are required to contribute data resources and computing resources for model training. However, traditional FL lacks security guarantees and is vulnerable to attacks and damage by malicious adversaries. In addition, existing incentive methods lack fairness to participants. Therefore, accurately identifying and preventing malicious nodes from acting maliciously, while effectively selecting and incentivizing participants, plays a vital role in improving the security and performance of FL. In this paper, we propose MDIFL, a Robust Federated Learning scheme Based on Malicious Detection and Incentives. Specifically, MDIFL first uses gradient similarity to calculate reputation, thereby maintaining the reputation of participants and identifying malicious adversaries, and then designs an effective incentive mechanism based on contract theory to achieve collaborative fairness. Extensive experimental results demonstrate that the proposed MDIFL can not only preferentially select and effectively motivate high-quality participants, but also correctly identify malicious adversaries, achieve fairness, and improve model performance.
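To illustrate the kind of gradient-similarity-based reputation maintenance the abstract describes, the following is a minimal sketch, not the paper's actual method: it assumes cosine similarity between each client's update and the aggregated global update, combined with an exponential moving-average reputation and a simple threshold for flagging suspected malicious clients. All function names, parameters, and thresholds here are illustrative assumptions.

    # Hypothetical sketch of gradient-similarity-based reputation scoring.
    # Names, decay factor, and threshold are assumptions, not from the paper.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two flattened gradient vectors."""
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0

    def update_reputations(reputations: dict, client_grads: dict,
                           global_grad: np.ndarray, decay: float = 0.8) -> dict:
        """Blend each client's previous reputation with its current similarity
        to the aggregated update (exponential moving average)."""
        for cid, grad in client_grads.items():
            sim = cosine_similarity(grad.ravel(), global_grad.ravel())
            prev = reputations.get(cid, 1.0)
            reputations[cid] = decay * prev + (1 - decay) * sim
        return reputations

    def flag_malicious(reputations: dict, threshold: float = 0.3) -> list:
        """Clients whose reputation drops below the threshold are flagged."""
        return [cid for cid, rep in reputations.items() if rep < threshold]

Under this sketch, a client repeatedly submitting updates that diverge from the aggregate sees its reputation decay toward low similarity values and is eventually flagged; the incentive mechanism could then weight rewards by reputation, though the contract-theoretic design in the paper is not reproduced here.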
