IEEE Access (Jan 2024)
Split Consensus Federated Learning: An Approach for Distributed Training and Inference
Abstract
Distributed Machine Learning (D-ML), such as Federated Learning (FL) and Split Learning (SL), aims to overcome the limitations of Centralized Machine Learning (C-ML) by enhancing scalability and efficiency. D-ML relies on the client-Parameter Server (PS) paradigm, in which clients collaboratively train ML models while keeping their data local, reducing the need for central data storage and preserving data privacy. In this paper, we propose a new fully-distributed method, named Split Consensus Federated Learning (SCFL), which combines the characteristics of FL and SL into a network of clients that cooperate in learning a shared model. Inspired by the iterative approach of Message Passing Neural Networks (MPNNs), the proposed SCFL framework decentralizes both the training and inference tasks of the neural network across the clients, while preserving the privacy of locally stored data. SCFL removes the need for a coordinating central entity, i.e., the PS, resulting in a fully-decentralized solution where both the training and inference procedures are distributed over the clients. We present three different strategies for implementing SCFL and validate them in a cooperative positioning use case, where clients use D-ML for network localization. Results show that the proposed SCFL method combines the computational power (and data) of all clients to train local models that, at convergence, closely approximate the global C-ML solution.
Keywords