IEEE Access (Jan 2024)

D2NAS: Efficient Neural Architecture Search With Performance Improvement and Model Size Reduction for Diverse Tasks

  • Jungeun Lee,
  • Seungyub Han,
  • Jungwoo Lee

DOI
https://doi.org/10.1109/ACCESS.2024.3434743
Journal volume & issue
Vol. 12
pp. 127074–127085

Abstract

Neural Architecture Search (NAS) has proven valuable in many applications such as computer vision, but its full potential emerges when it is applied to less-explored domains. In this work, we study NAS for problems in diverse fields where deep neural networks are expected to be deployed in real-world settings. We introduce D2NAS (Differential and Diverse NAS), which leverages Differentiable ARchiTecture Search (DARTS) and Diverse-task Architecture SearcH (DASH) for architecture discovery. Our approach outperforms existing models, including Wide ResNet (WRN) and DASH, on NAS-Bench-360, a benchmark of ten diverse tasks spanning 1D, 2D, and 2D dense-prediction problems. Compared to DASH, D2NAS reduces average error rates by 12.2% while achieving an 85.1% reduction in average parameter count (up to 97.3%) and a 91.3% reduction in floating-point operations (FLOPs, up to 99.3%). D2NAS therefore enables lightweight architectures with superior performance across varied tasks, extending the applicability of NAS beyond computer vision to settings such as mobile applications.
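
As background for readers unfamiliar with the differentiable search that D2NAS builds on, the sketch below illustrates the DARTS-style continuous relaxation: each edge of the network computes a softmax-weighted mixture of candidate operations, and the mixture weights (architecture parameters) are learned by gradient descent alongside the network weights. This is a minimal PyTorch illustration; the candidate set, the MixedOp name, and the 1D layout are illustrative assumptions, not the paper's actual DASH-derived search space.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MixedOp(nn.Module):
        """DARTS-style mixed operation: a softmax-weighted sum of candidate ops.

        The candidate set here (conv3 / conv5 / identity) is a placeholder;
        D2NAS's actual search space follows DASH and is task-dependent.
        """

        def __init__(self, channels: int):
            super().__init__()
            self.ops = nn.ModuleList([
                nn.Conv1d(channels, channels, kernel_size=3, padding=1),
                nn.Conv1d(channels, channels, kernel_size=5, padding=2),
                nn.Identity(),
            ])
            # Architecture parameters (alpha), trained jointly with the weights.
            self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            weights = F.softmax(self.alpha, dim=0)
            return sum(w * op(x) for w, op in zip(weights, self.ops))

    if __name__ == "__main__":
        x = torch.randn(2, 16, 100)       # (batch, channels, length), a 1D task
        op = MixedOp(channels=16)
        print(op(x).shape)                # torch.Size([2, 16, 100])
        # After search, the operation with the largest alpha would be retained
        # to form the final discrete architecture.
        print(op.alpha.argmax().item())

Discretizing the mixture after search (keeping only the highest-weighted operation per edge) is what allows this family of methods to produce the compact final architectures reflected in the parameter and FLOP reductions reported above.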

Keywords