IEEE Access (Jan 2023)

Bayesian Optimization-Driven Adversarial Poisoning Attacks Against Distributed Learning

  • Marios Aristodemou,
  • Xiaolan Liu,
  • Sangarapillai Lambotharan,
  • Basil AsSadhan

DOI: https://doi.org/10.1109/ACCESS.2023.3304541
Journal volume & issue: Vol. 11, pp. 86214–86226

Abstract


The Metaverse is envisioned as the next-generation human-centric Internet, offering users an immersive experience with broad applications in healthcare, education, entertainment, and industry. These applications require the analysis of massive data that contain private and sensitive information. A potential solution for preserving privacy is to deploy distributed learning frameworks, such as federated learning (FL) and split learning (SL), owing to their ability to mitigate privacy leakage and analyse personalised data without sharing the raw data. However, FL and SL are known to remain susceptible to adversarial poisoning attacks. In this paper, we analyse these critical issues for the privacy-preserving mechanisms of Metaverse services. We develop a novel poisoning attack based on Bayesian optimisation to emulate adversarial behaviour against FL (BO-FLPA) and SL (BO-SLPA), which is important for the future development of effective defence algorithms. Specifically, we develop a layer-optimisation method using the intuition of black-box optimisation, assuming a functional relationship between the prediction uncertainty and the layer parameters being optimised. This optimisation yields the optimal weight parameters for a hidden layer: the first or second layer for FL, and the first layer for SL. Numerical results demonstrate that, in both FL and SL, the poisoned hidden layers increase the model's susceptibility to adversarial attacks, producing predictions with low confidence or a larger deviation in the probability density function of the predictions.
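To make the layer-optimisation idea concrete, the following is a minimal sketch (not the authors' implementation) of a black-box search over a hidden layer's weights to maximise predictive uncertainty, using scikit-optimize's Gaussian-process optimiser as a stand-in for the paper's Bayesian optimisation. The toy model, data, entropy objective, and all hyperparameters are illustrative assumptions.

```python
# Sketch of BO-driven layer poisoning: Bayesian optimisation searches for
# first-layer weights that maximise the mean predictive entropy of a small
# classifier on a probe batch. Everything here is an illustrative assumption.
import numpy as np
from skopt import gp_minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 4))    # probe inputs (stand-in for client data)
W1 = rng.normal(size=(4, 8))     # benign first-layer weights (to be poisoned)
W2 = rng.normal(size=(8, 3))     # second layer, kept fixed

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predictive_entropy(w_flat):
    """Mean entropy of predictions under candidate first-layer weights."""
    W1_poisoned = np.asarray(w_flat).reshape(W1.shape)
    h = np.tanh(X @ W1_poisoned)
    p = softmax(h @ W2)
    return float((-p * np.log(p + 1e-12)).sum(axis=1).mean())

# gp_minimize minimises, so negate the entropy to maximise uncertainty.
bounds = [(-3.0, 3.0)] * W1.size
res = gp_minimize(lambda w: -predictive_entropy(w), bounds,
                  n_calls=60, random_state=0)
print("max predictive entropy found:", -res.fun)
```

In this reading, the surrogate model treats the flattened layer weights as black-box parameters and the prediction uncertainty as the objective, mirroring the abstract's assumption of a function between the two; a real attack would substitute the optimised weights into the client's model update (FL) or the client-side network segment (SL).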

Keywords