IEEE Access (Jan 2023)

L-GhostNet: Extract Better Quality Features

  • Jing Chi,
  • Shaohua Guo,
  • Haopeng Zhang,
  • Yu Shan

DOI: https://doi.org/10.1109/ACCESS.2023.3234108
Journal volume & issue: Vol. 11, pp. 2361–2374

Abstract


L-GhostNet, a lightweight image-recognition model based on an improved GhostNet, is proposed to address the heavy computation and storage costs of deep convolutional neural networks. The model incorporates learning group convolution and an improved coordinate attention (CA) module into GhostNet to reduce computation and parameter count while increasing the network's flexibility. The pruning ratio in the learning group convolution is raised to control when pruning ends during training, and the improved CA replaces the convolutional layer with a fully connected layer, which couples the two spatial dimensions more tightly and makes the module more flexible. Experiments on datasets from several domains (grape-leaf recognition, gesture recognition, face recognition, rice recognition, and CIFAR-10) show that, compared with GhostNet, L-GhostNet slightly improves accuracy while cutting computation by more than 44%, reducing the parameter count by more than 33%, and raising FPS by 26% on all datasets. Compared with other common lightweight models such as MobileNets and ShuffleNets at the same FLOPs level, it achieves the best overall performance: the lowest FLOPs, the highest accuracy, and fewer parameters on all datasets.
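The coordinate-attention idea the abstract builds on can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: it pools the feature map along each spatial direction, mixes channels with a fully connected map (standing in for the convolutional layer the authors replace), and reweights the input. The function name and the weight matrices `w_h`, `w_w` are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def coord_attention_fc(x, w_h, w_w):
    """Sketch of coordinate attention with fully connected channel mixing.
    x: nested list of shape (C, H, W); w_h, w_w: hypothetical C x C weights."""
    C, H, W = len(x), len(x[0]), len(x[0][0])
    # 1) Directional average pooling: one descriptor per row / per column, per channel.
    pool_h = [[sum(x[c][i]) / W for i in range(H)] for c in range(C)]                      # (C, H)
    pool_w = [[sum(x[c][i][j] for i in range(H)) / H for j in range(W)] for c in range(C)] # (C, W)
    # 2) Fully connected mixing across channels + sigmoid -> attention maps.
    a_h = [[sigmoid(sum(w_h[c][k] * pool_h[k][i] for k in range(C))) for i in range(H)]
           for c in range(C)]                                                              # (C, H)
    a_w = [[sigmoid(sum(w_w[c][k] * pool_w[k][j] for k in range(C))) for j in range(W)]
           for c in range(C)]                                                              # (C, W)
    # 3) Reweight the input along both spatial directions.
    return [[[x[c][i][j] * a_h[c][i] * a_w[c][j] for j in range(W)]
             for i in range(H)] for c in range(C)]
```

With zero weights, each sigmoid is 0.5, so every element is scaled by 0.25; in a trained network the learned weights would emphasize informative rows and columns instead.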

Keywords