IEEE Access (Jan 2023)

ConnectomeNet: A Unified Deep Neural Network Modeling Framework for Multi-Task Learning

  • Heechul Lim,
  • Kang-Wook Chon,
  • Min-Soo Kim

DOI
https://doi.org/10.1109/ACCESS.2023.3258975
Journal volume & issue
Vol. 11
pp. 34297–34308

Abstract

Despite recent advances in deep neural networks (DNNs), multi-task learning has not yet been able to exploit DNNs thoroughly. Designing a DNN for even a single task requires considerable skill in choosing many architecture parameters a priori, before training begins, and extending this design process to multi-task learning makes it even more challenging. Inspired by findings from neuroscience, we propose a unified DNN modeling framework called ConnectomeNet that encompasses the best principles of contemporary DNN designs and unifies them with transfer, curriculum, and adaptive structural learning, all in the context of multi-task learning. Specifically, ConnectomeNet iteratively assembles connectome-like neuron units into a high-level topology represented as a general directed acyclic graph. As a result, ConnectomeNet enables non-trivial automatic sharing of neurons across multiple tasks and learns to adapt its topology economically for a new task. Extensive experiments, including an ablation study, show that ConnectomeNet outperforms state-of-the-art multi-task learning methods on measures such as the degree of catastrophic forgetting under sequential learning. For the degree of catastrophic forgetting, measured as normalized accuracy, our proposed method reaches 100%, surpassing mean-IMM (89.0%) and DEN (99.97%).
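
The abstract's central mechanism, neuron units wired as a general directed acyclic graph (DAG) so that units on shared paths serve several tasks, can be illustrated with a minimal sketch. The PyTorch code below is an assumption-laden illustration, not the authors' implementation: the unit design, the four-node example DAG, and the names Unit, DAGNet, head1, and head2 are all hypothetical.

import torch
import torch.nn as nn

class Unit(nn.Module):
    # One DAG node: a small fully connected block with a nonlinearity.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.fc(x))

class DAGNet(nn.Module):
    # Units wired as a general DAG: a -> {b, c}, then {b, c} -> d.
    # Every unit lies on the computation path of both task heads,
    # so the units are shared across tasks while the heads stay task-specific.
    def __init__(self, dim=32):
        super().__init__()
        self.a = Unit(dim, dim)
        self.b = Unit(dim, dim)
        self.c = Unit(dim, dim)
        self.d = Unit(2 * dim, dim)        # merge node for the outputs of b and c
        self.head1 = nn.Linear(dim, 10)    # task 1 head (e.g., 10 classes)
        self.head2 = nn.Linear(dim, 5)     # task 2 head (e.g., 5 classes)

    def forward(self, x, task):
        ha = self.a(x)                                # shared root unit
        hb, hc = self.b(ha), self.c(ha)               # parallel branches
        hd = self.d(torch.cat([hb, hc], dim=-1))      # DAG merge
        return self.head1(hd) if task == 1 else self.head2(hd)

net = DAGNet()
x = torch.randn(4, 32)
print(net(x, task=1).shape)  # torch.Size([4, 10])
print(net(x, task=2).shape)  # torch.Size([4, 5])

Under this sketch, adapting the topology for a new task would amount to adding new units, edges, and a task head to the graph while reusing existing units where possible.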

Keywords