IEEE Access (Jan 2022)

“Ghost” and Attention in Binary Neural Network

  • Ruimin Sun,
  • Wanbing Zou,
  • Yi Zhan

DOI
https://doi.org/10.1109/ACCESS.2022.3181192
Journal volume & issue
Vol. 10
pp. 60550 – 60557

Abstract


As far as memory footprint and computational scale are concerned, lightweight Binary Neural Networks (BNNs) have great advantages on resource-limited platforms such as AIoT (Artificial Intelligence in Internet of Things) edge terminals and wearable and portable devices. However, the binarization process naturally causes considerable information loss and thus degrades accuracy. In this article, three techniques are introduced to improve the accuracy of the binarized ReActNet while keeping the computation low in complexity. Firstly, an improved Binarized Ghost Module (BGM) for the ReActNet is proposed to enrich the feature-map information while keeping the computational scale of the structure at a very low level. Secondly, we propose a new Label-aware Loss Function (LLF) applied at the penultimate layer as a supervisor that takes the label information into consideration. This auxiliary loss makes the feature vectors of each category more separable and accordingly improves the classification accuracy of the final fully-connected layer. Thirdly, the Normalization-based Attention Module (NAM) is adopted to regulate the activation flow, which helps avoid the gradient saturation problem. With these three approaches, our improved binarized network outperforms other state-of-the-art methods, achieving 71.4% Top-1 accuracy on ImageNet and 86.45% accuracy on CIFAR-10. Meanwhile, its computational scale of $0.86\times 10^{8}$ OPs is the lowest among mainstream BNN models. The experimental results demonstrate the effectiveness of our proposals, and the study is promising for future low-power hardware implementations.
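To illustrate the Ghost-module idea in a binarized setting, the following is a minimal PyTorch sketch, not the authors' released code. The class names, layer sizes, and the straight-through sign binarization are illustrative assumptions; the sketch only shows how a binary primary convolution can be paired with a cheap depthwise convolution that derives additional "ghost" feature maps, which is the general mechanism the BGM builds on.

```python
# Minimal sketch of a Ghost-style block with sign-binarized activations.
# All names and layer sizes here are illustrative assumptions, not the paper's
# exact BGM definition: a binary "primary" convolution produces part of the
# output channels, and an inexpensive depthwise convolution generates the
# remaining "ghost" feature maps from them.
import torch
import torch.nn as nn


class BinaryActivation(nn.Module):
    """Sign binarization with a straight-through estimator for the backward pass."""
    def forward(self, x):
        binary = torch.sign(x)
        # Forward uses sign(x); backward uses the clipped identity so gradients
        # can still flow through the binarization.
        return (binary - x).detach() + torch.clamp(x, -1.0, 1.0)


class BinarizedGhostBlock(nn.Module):
    """Ghost-style block: binary primary conv + cheap depthwise 'ghost' conv."""
    def __init__(self, in_channels, out_channels, ratio=2):
        super().__init__()
        primary_channels = out_channels // ratio
        ghost_channels = out_channels - primary_channels

        self.binarize = BinaryActivation()
        # Primary path: convolution applied to binarized activations
        # (weight binarization is omitted here for brevity).
        self.primary = nn.Sequential(
            nn.Conv2d(in_channels, primary_channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(primary_channels),
        )
        # Ghost path: cheap depthwise convolution that derives extra feature
        # maps from the primary features instead of another full convolution.
        self.ghost = nn.Sequential(
            nn.Conv2d(primary_channels, ghost_channels, 3, padding=1,
                      groups=primary_channels, bias=False),
            nn.BatchNorm2d(ghost_channels),
        )

    def forward(self, x):
        primary = self.primary(self.binarize(x))
        ghost = self.ghost(primary)
        return torch.cat([primary, ghost], dim=1)


if __name__ == "__main__":
    block = BinarizedGhostBlock(in_channels=64, out_channels=128)
    out = block(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 128, 32, 32])
```

The design choice the sketch highlights is that only the primary convolution carries the full input-to-output channel cost; the ghost path is depthwise and therefore adds feature maps at a near-negligible increase in operations, which is why such a module can enrich BNN features without raising the OPs count much.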

Keywords