Electronics Letters (Dec 2023)

Two‐stage architectural fine‐tuning for neural architecture search in efficient transfer learning

  • Soohyun Park,
  • Seok Bin Son,
  • Youn Kyu Lee,
  • Soyi Jung,
  • Joongheon Kim

DOI
https://doi.org/10.1049/ell2.13066
Journal volume & issue
Vol. 59, no. 24

Abstract

In many deep neural network (DNN) applications, the difficulty of gathering high-quality data in industrial settings hinders the practical use of DNNs. Transfer learning (TL) addresses this by leveraging the pretrained knowledge of DNNs built on large-scale datasets. Toward this TL objective, this paper proposes a two-stage architectural fine-tuning method, inspired by neural architecture search (NAS), that reduces the cost and time of exploring an efficient DNN model. The first stage, mutation, reduces search costs by exploiting a priori architectural information; the second stage, early stopping, reduces NAS costs by terminating the search process partway through computation. Data-intensive experimental results verify that the proposed method outperforms the benchmarks.
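To make the two stages concrete, the following Python sketch shows one plausible reading of the abstract: a mutation step that perturbs a known-good seed architecture (the a priori information) and an early-stopping rule that abandons the search after several rounds without improvement. The search space, mutation operator, and scoring function here are illustrative placeholders, not the authors' implementation.

import random

# Hypothetical search space: a candidate fine-tuning architecture is
# described by (number of frozen backbone blocks, width of the new head).
# These names and ranges are assumptions made for illustration only.
SEED = {"frozen_blocks": 3, "head_width": 256}

def mutate(arch):
    """Stage 1 (mutation): perturb a seed architecture so the search
    starts from a priori architectural knowledge, not random init."""
    child = dict(arch)
    key = random.choice(list(child))
    if key == "frozen_blocks":
        child[key] = max(0, child[key] + random.choice([-1, 1]))
    else:
        child[key] = max(32, child[key] + random.choice([-64, 64]))
    return child

def evaluate(arch):
    """Stand-in for fine-tuning the candidate and measuring validation
    accuracy; a real run would train on the target dataset."""
    return (0.9
            - 0.01 * abs(arch["frozen_blocks"] - 2)
            - 0.0001 * abs(arch["head_width"] - 320)
            + random.uniform(-0.005, 0.005))

def search(budget=50, patience=5):
    """Stage 2 (early stopping): quit once `patience` consecutive
    rounds fail to improve on the best score seen so far."""
    best_arch, best_score = SEED, evaluate(SEED)
    stale = 0
    for _ in range(budget):
        cand = mutate(best_arch)
        score = evaluate(cand)
        if score > best_score:
            best_arch, best_score, stale = cand, score, 0
        else:
            stale += 1
            if stale >= patience:
                break  # terminate mid-search to save NAS cost
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = search()
    print(f"best architecture: {arch}, score: {score:.4f}")

The design point the sketch captures is that both stages cut cost from different ends: mutation narrows where the search begins, while early stopping bounds how long it runs.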

Keywords