IEEE Access (Jan 2023)

Online Decentralized Multi-Agents Meta-Learning With Byzantine Resiliency

  • Olusola T. Odeyomi,
  • Bassey Ude,
  • Kaushik Roy

DOI
https://doi.org/10.1109/ACCESS.2023.3291677
Journal volume & issue
Vol. 11
pp. 68286–68300

Abstract


Meta-learning is a learning-to-learn paradigm that leverages past learning experiences for quick adaptation to new learning tasks. It has wide applications, such as few-shot learning, reinforcement learning, neural architecture search, and federated learning. It has been extended to the online learning setting, where task data distributions arrive sequentially, thereby enabling continuous lifelong learning. However, in the online meta-learning setting, a single agent must learn many varieties of related tasks, yet it is limited to its local task data. Online decentralized meta-learning algorithms are therefore designed to allow an agent to collaborate with its neighbors and improve its learning performance. Despite their advantages, online decentralized meta-learning algorithms are susceptible to Byzantine attacks caused by the diffusion of poisonous information from unidentifiable Byzantine agents in the network. This is a serious problem because normal agents become unable to learn, and convergence to the global meta-initializer is thwarted. State-of-the-art algorithms designed to provide robustness against Byzantine attacks, such as BRIDGE, are slow and cannot work in online learning settings. Therefore, we propose an online decentralized meta-learning algorithm that works with two Byzantine-resilient aggregation techniques: modified coordinate-wise screening and centerpoint aggregation. The proposed algorithm provides faster convergence and guarantees both resiliency and continuous lifelong learning. Our simulation results show that the proposed algorithm outperforms state-of-the-art algorithms.
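To make the aggregation idea concrete, the sketch below shows a generic coordinate-wise screening (trimmed) aggregation, a standard Byzantine-resilient building block. It is only an illustration under assumptions: the abstract does not spell out the paper's "modified" screening rule or its centerpoint aggregation, and the function name, parameter `b` (an assumed upper bound on Byzantine neighbors), and the NumPy implementation are ours, not the authors'.

```python
import numpy as np

def coordinatewise_screening(own_params, neighbor_params, b):
    """Generic coordinate-wise screening aggregation (illustrative sketch).

    For each coordinate, the b largest and b smallest values received
    across agents are discarded before averaging, which bounds the
    influence of up to b Byzantine neighbors. This is NOT the paper's
    exact modified rule, only a common baseline form of the technique.

    own_params      : np.ndarray, the agent's current meta-initializer.
    neighbor_params : list of np.ndarray, parameters received from neighbors.
    b               : assumed upper bound on the number of Byzantine neighbors.
    """
    stacked = np.vstack([own_params] + list(neighbor_params))
    n_agents = stacked.shape[0]
    if 2 * b >= n_agents:
        raise ValueError("Too few agents for the assumed Byzantine bound b.")
    # Sort each coordinate independently across agents.
    sorted_vals = np.sort(stacked, axis=0)
    # Trim the b extreme values on each side, then average the survivors.
    trimmed = sorted_vals[b : n_agents - b]
    return trimmed.mean(axis=0)

# Hypothetical usage: aggregate a 5-dimensional meta-initializer from 6 neighbors,
# tolerating at most 1 Byzantine neighbor.
rng = np.random.default_rng(0)
own = rng.normal(size=5)
neighbors = [rng.normal(size=5) for _ in range(6)]
aggregated = coordinatewise_screening(own, neighbors, b=1)
```

In decentralized schemes of this kind, each agent would apply such a screening step to the parameters it receives before taking its local (meta-)gradient update, so that a bounded number of poisoned messages cannot arbitrarily skew the aggregate.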

Keywords