Journal of King Saud University: Computer and Information Sciences (Mar 2024)

RRFL: A rational and reliable federated learning incentive framework for mobile crowdsensing

  • Qingyi He,
  • Youliang Tian,
  • Shuai Wang,
  • Jinbo Xiong

Journal volume & issue
Vol. 36, no. 3
p. 101977

Abstract

Data privacy for mobile users (MUs) in mobile crowdsensing (MCS) has attracted significant attention. Federated Learning (FL) breaks down data silos, enabling MUs to train locally without revealing their raw data. However, FL faces challenges from the selfish and malicious behavior of MUs, which can harm the global model’s performance. To tackle these challenges, we propose a rational, reliable FL framework (RRFL) for MCS. First, we calculate risk scores for MUs using the Euclidean distance of their updates and the frequency of their past malicious behavior, then eliminate outlier updates. Second, we design a long-term, fair incentive mechanism that evaluates each MU’s comprehensive reputation from the risk scores of its historical sensing tasks. Rewards are allocated exclusively to consistently outstanding MUs, encouraging honest cooperation in MCS. Finally, we model the interaction as an extensive-form game with imperfect information and derive its sequential equilibrium to validate the scheme’s rationality. Experiments on the MNIST dataset demonstrate the effectiveness and reliability of RRFL, showing strong accuracy and low overall cost. MCS participants achieve the desired maximum utility, with over a 50% reduction in detection costs compared to short-term FL incentive mechanisms in MCS.
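The abstract only sketches the mechanism, so the following is a minimal illustrative sketch, not the authors' actual algorithm: risk scores blend each MU's Euclidean distance from the median update with its historical malicious-behavior frequency, outliers are filtered, and rewards go only to MUs whose long-term reputation clears a bar. All function names, the blending weight `alpha`, the EMA factor `beta`, the threshold, and the reputation bar are hypothetical choices for illustration.

```python
import numpy as np

def risk_scores(updates, mal_history, alpha=0.5):
    """Hypothetical risk score: normalized Euclidean distance from the
    median update, blended with each MU's past malicious-flag frequency."""
    center = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - center, axis=1)
    dist_term = dists / (dists.max() + 1e-12)
    freq_term = mal_history / mal_history.max() if mal_history.max() > 0 else mal_history
    return alpha * dist_term + (1 - alpha) * freq_term

def filter_outliers(updates, scores, threshold=0.7):
    """Drop updates whose risk score exceeds the (illustrative) threshold."""
    keep = scores < threshold
    return updates[keep], keep

def update_reputation(reputation, scores, beta=0.8):
    """Long-term reputation as an exponential moving average of per-round
    trust (1 - risk), so one good round cannot erase a bad history."""
    return beta * reputation + (1 - beta) * (1 - scores)

def allocate_rewards(reputation, budget, bar=0.5):
    """Reward only MUs whose reputation clears the bar, in proportion
    to their reputation (sketch of a long-term incentive)."""
    rewards = np.zeros_like(reputation)
    eligible = reputation >= bar
    if eligible.any():
        rewards[eligible] = budget * reputation[eligible] / reputation[eligible].sum()
    return rewards
```

As a toy usage, four honest MUs submitting similar updates plus one MU submitting a far-off update: the outlier's risk score approaches 1, it is filtered, its reputation decays below the bar, and the full reward budget is split among the honest MUs.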
