Applied Sciences (Mar 2023)

EffShuffNet: An Efficient Neural Architecture for Adopting a Multi-Model

  • Jong-In Kim,
  • Gwang-Hyun Yu,
  • Jin Lee,
  • Dang Thanh Vu,
  • Jung-Hyun Kim,
  • Hyun-Sun Park,
  • Jin-Young Kim,
  • Sung-Hoon Hong

DOI
https://doi.org/10.3390/app13063505
Journal volume & issue
Vol. 13, no. 6
p. 3505

Abstract

This work discusses the challenges of multi-label image classification and presents Efficient Shuffle Net (EffShuffNet), a novel convolutional neural network (CNN) architecture that addresses them. Multi-label classification is difficult because the complexity of prediction grows with the number of labels and classes, and current multi-model approaches require separately optimized deep learning models, which increases computational cost. The EffShuff block divides the input feature map into two halves and processes them differently: one half undergoes a lightweight convolution while the other undergoes average pooling. The EffShuff transition component shuffles the feature maps after the lightweight convolution, yielding a 57.9% reduction in computational cost compared with ShuffleNetv2. Furthermore, we propose the EffShuff-Dense architecture, which incorporates dense connections to further emphasize low-level features. In experiments, EffShuffNet achieved 96.975% accuracy in age and gender classification, 5.83% higher than the state of the art, while EffShuffDenseNet performed even better, at 97.63% accuracy. The proposed models also achieved better classification performance with smaller model sizes in fine-grained image classification experiments.
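To make the block design concrete, the following is a minimal PyTorch sketch of a layer matching the abstract's description: split the input feature map in two, apply a lightweight convolution to one half and average pooling to the other, then concatenate and shuffle the channels. This is an illustration only, not the authors' reference implementation; the branch details (a depthwise-separable convolution as the "lightweight" operator, a 3×3 stride-1 average pool, and a two-group shuffle) are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn


def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    """Interleave channels across groups, as in ShuffleNetv2."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


class EffShuffBlock(nn.Module):
    """Hypothetical sketch of the EffShuff block described in the abstract."""

    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        # Lightweight convolution branch: depthwise + pointwise convolution
        # (an assumption; the paper may use a different lightweight operator).
        self.light_conv = nn.Sequential(
            nn.Conv2d(half, half, kernel_size=3, padding=1, groups=half, bias=False),
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, kernel_size=1, bias=False),
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
        )
        # Average-pooling branch; stride 1 preserves the spatial size.
        self.avg_pool = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = x.chunk(2, dim=1)        # divide the feature map into two halves
        a = self.light_conv(a)          # lightweight convolution on one half
        b = self.avg_pool(b)            # average pooling on the other half
        out = torch.cat([a, b], dim=1)  # recombine the halves
        return channel_shuffle(out, 2)  # shuffle so the branches mix


if __name__ == "__main__":
    block = EffShuffBlock(64)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```

The design intuition follows ShuffleNetv2: because only half the channels pass through convolutions in each block, the per-block cost is roughly halved, and the channel shuffle ensures information still flows between the two halves across stacked blocks.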

Keywords