IEEE Access (Jan 2025)
Seeking Stability for Multi-Leader Stackelberg Game as an Incentive Mechanism for Multi-Requester Federated Learning
Abstract
Federated learning (FL) is an emerging distributed-learning technology that allows a requester to hire a set of independent FL workers for a model-training task. Because FL workers are not obliged to perform the training task, an incentive mechanism is needed to motivate them. However, existing incentive mechanisms for FL were mainly designed for a single requester. A few studies considered multiple requesters but restricted each worker from training multiple models simultaneously. In this paper, we remove that restriction and investigate the interplay among requesters in addition to the inherent interplay among workers and between requesters and workers. Two crucial issues in this setting are whether these interplays lead to a stable (i.e., attainable) outcome and how to find it. We model the problem as seeking an equilibrium in a multi-leader multi-follower Stackelberg game and propose an exact method, based on derivation and backward induction, to identify the Stackelberg equilibrium. We also show how to adapt the proposed method to scenarios with capacity/budget constraints and privacy concerns. We conduct a series of analyses to study the interplay between workers and requesters at Stackelberg equilibria. The results confirm the requesters' first-mover advantage and reveal the possibility of mutual benefits between requesters and workers.
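To make the backward-induction idea concrete, the following is a minimal sketch of a single-leader, single-follower Stackelberg interaction. The quadratic-cost worker utility, the linear requester valuation, and the SymPy-based symbolic derivation are illustrative assumptions only; they are not the multi-leader multi-follower model or the exact method proposed in this paper.

# Minimal sketch (assumed forms, not the paper's model): backward induction
# in a single-leader (requester) single-follower (worker) Stackelberg game.
import sympy as sp

r, x, a, c = sp.symbols('r x a c', positive=True)

# Stage 2 (follower): the worker picks effort x to maximize its utility
# u_w = r*x - c*x**2 (reward minus assumed quadratic training cost).
u_w = r * x - c * x**2
x_best = sp.solve(sp.diff(u_w, x), x)[0]        # best response x*(r) = r/(2c)

# Stage 1 (leader): the requester anticipates x*(r) and chooses the reward
# rate r to maximize an assumed linear valuation a*x of effort, minus payment.
u_l = a * x_best - r * x_best
r_star = sp.solve(sp.diff(u_l, r), r)[0]        # equilibrium reward r* = a/2
x_star = sp.simplify(x_best.subs(r, r_star))    # equilibrium effort x* = a/(4c)

print('worker best response x*(r):', x_best)
print('equilibrium reward r*:', r_star)
print('equilibrium effort x*:', x_star)

Solving the follower's problem first and substituting its best response into the leader's objective is what gives the leader its first-mover advantage in this toy setting; the paper extends this reasoning to multiple leaders and multiple followers.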
Keywords