Sensors (Aug 2024)

Evaluating Federated Learning Simulators: A Comparative Analysis of Horizontal and Vertical Approaches

  • Ismail M. Elshair,
  • Tariq Jamil Saifullah Khanzada,
  • Muhammad Farrukh Shahid,
  • Shahbaz Siddiqui

DOI
https://doi.org/10.3390/s24165149
Journal volume & issue
Vol. 24, no. 16
p. 5149

Abstract


Federated learning (FL) is a decentralized machine learning approach in which each device trains a local model, eliminating the need for centralized data collection and preserving data privacy. Unlike typical centralized machine learning, collaborative model training in FL aggregates updates from many devices without transmitting raw data. This ensures data privacy and security while deriving collective learning from distributed data sources. FL models exhibit high efficacy in terms of privacy protection, scalability, and robustness, contingent on successful communication and collaboration among devices. This paper explores both decentralized and centralized topologies in the context of FL. In this respect, we investigated and evaluated in detail four widely used end-to-end FL frameworks: FedML, Flower, Flute, and PySyft. We specifically focused on vertical and horizontal FL systems using a logistic regression model aggregated by the FedAvg algorithm. Specifically, we conducted experiments on two image datasets, MNIST and Fashion-MNIST, to evaluate their efficiency and performance. Our paper provides initial findings on how to effectively combine horizontal and vertical solutions to address common difficulties, such as managing model synchronization and communication overhead. Our research highlights the trade-offs in the performance of several federated learning simulation frameworks.
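As a minimal sketch of the FedAvg-style aggregation referenced in the abstract (not the authors' exact implementation or any specific framework's API), the snippet below shows how a server could average logistic regression weight vectors from several clients, weighted by local dataset size. The function name, client counts, and weight values are illustrative assumptions only.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style sketch).

    client_weights: list of 1-D NumPy arrays, one parameter vector per client.
    client_sizes:   list of local dataset sizes, used as aggregation weights.
    """
    total = sum(client_sizes)
    stacked = np.stack(client_weights)        # shape: (num_clients, num_params)
    coeffs = np.array(client_sizes) / total   # n_k / n weighting, as in FedAvg
    return coeffs @ stacked                   # weighted sum over clients

# Hypothetical example: three clients holding logistic-regression weight vectors.
w_clients = [np.array([0.2, -0.1, 0.5]),
             np.array([0.3,  0.0, 0.4]),
             np.array([0.1, -0.2, 0.6])]
n_clients = [600, 300, 100]                   # assumed local sample counts
w_global = fedavg(w_clients, n_clients)
print(w_global)
```

In a horizontal FL setting each client holds different samples with the same features, so this averaging applies directly to the full weight vector; in a vertical setting, where clients hold different feature subsets, aggregation must instead align partial model components, which is one source of the synchronization overhead discussed in the paper.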

Keywords