Sensors (Aug 2024)

TransNeural: An Enhanced-Transformer-Based Performance Pre-Validation Model for Split Learning Tasks

  • Guangyi Liu,
  • Mancong Kang,
  • Yanhong Zhu,
  • Qingbi Zheng,
  • Maosheng Zhu,
  • Na Li

DOI
https://doi.org/10.3390/s24165148
Journal volume & issue
Vol. 24, no. 16
p. 5148

Abstract


While digital twin networks (DTNs) can potentially estimate network strategy performance in pre-validation environments, they are still in their infancy for split learning (SL) tasks, facing challenges such as unknown non-i.i.d. data distributions, inaccurate channel states, and misreported resource availability across devices. To address these challenges, this paper proposes a TransNeural algorithm for the DTN pre-validation environment to estimate SL latency and convergence. First, the TransNeural algorithm integrates transformers to efficiently model data similarities between different devices, since both the data distributions and the device participation order strongly influence SL training convergence. Second, it leverages a neural network to automatically learn the complex relationships of SL latency and convergence with data distributions, wireless and computing resources, dataset sizes, and training iterations. Deviations in user reports are also accounted for in the estimation process. Simulations show that the TransNeural algorithm improves latency estimation accuracy by 9.3% and convergence estimation accuracy by 22.4% compared to traditional equation-based methods.
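To make the described architecture concrete, the sketch below (not the authors' code) shows one plausible reading of the abstract: a transformer encoder ingests a sequence of per-device feature vectors whose order reflects the device participation sequence, and a neural-network head maps the pooled representation to latency and convergence estimates. The feature dimension, layer sizes, mean pooling, and two-output regression head are all assumptions for illustration.

```python
# Hypothetical sketch of the TransNeural idea, assuming each device is
# described by a fixed-length feature vector (data-distribution summary,
# reported channel state, compute capacity, dataset size, ...).
import torch
import torch.nn as nn

class TransNeuralSketch(nn.Module):
    def __init__(self, feat_dim=16, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        # Self-attention models similarities between devices' features.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Assumed regression head: two outputs, SL latency and a
        # convergence metric.
        self.head = nn.Sequential(
            nn.Linear(d_model, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, device_feats):
        # device_feats: (batch, n_devices, feat_dim); the sequence order
        # encodes the device participation order in SL training.
        h = self.encoder(self.embed(device_feats))
        return self.head(h.mean(dim=1))  # (batch, 2): [latency, convergence]

# Example: estimate for 8 pre-validation scenarios with 5 devices each.
model = TransNeuralSketch()
est = model(torch.randn(8, 5, 16))
print(est.shape)  # torch.Size([8, 2])
```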

Keywords