IET Image Processing (Dec 2024)

Object detection in smart indoor shopping using an enhanced YOLOv8n algorithm

  • Yawen Zhao,
  • Defu Yang,
  • Sheng Cao,
  • Bingyu Cai,
  • Maryamah Maryamah,
  • Mahmud Iwan Solihin

DOI
https://doi.org/10.1049/ipr2.13284
Journal volume & issue
Vol. 18, no. 14
pp. 4745 – 4759

Abstract

This paper introduces an enhanced object detection algorithm tailored for indoor shopping applications, a critical component of smart cities and smart shopping ecosystems. The proposed method builds on the YOLOv8n algorithm by integrating a ParNetAttention module into the backbone's C2f module, creating the novel C2f-ParNet structure. This innovation strengthens feature extraction, which is crucial for detecting intricate details in complex indoor environments. Additionally, the content-aware reassembly of features (CARAFE) module is incorporated into the neck network, improving the fusion of target features and the focus on objects of interest, thereby boosting detection accuracy. To improve training efficiency, the model employs Wise Intersection over Union version 3 (WIoUv3) as its regression loss function, accelerating convergence and improving performance. Experimental results show that the enhanced YOLOv8n achieves a mean average precision at an IoU threshold of 0.5 (mAP@50) of 61.2%, a 1.2 percentage point improvement over the baseline. The fully optimized algorithm achieves an mAP@50 of 65.9% and an F1 score of 63.5%, outperforming both the original YOLOv8n and existing algorithms. Furthermore, with a frame rate of 106.5 FPS and a computational complexity of just 12.9 GFLOPs (giga floating-point operations), the approach balances high performance with lightweight efficiency, making it well suited to real-time applications in smart retail environments.
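
As a rough illustration of the C2f-ParNet idea described above, the PyTorch sketch below approximates a ParNet-style attention unit of the kind the paper inserts into YOLOv8n's C2f blocks. It is a minimal sketch based on the published ParNet attention design, not the authors' implementation; the channel count, layer arrangement, and the way the unit would be fused into C2f are assumptions for illustration only.

import torch
import torch.nn as nn

class ParNetAttention(nn.Module):
    """ParNet-style attention: three parallel branches (1x1 conv, 3x3 conv,
    squeeze-excitation gate), summed and passed through SiLU."""
    def __init__(self, channels: int):
        super().__init__()
        # Squeeze-excitation branch: global context -> per-channel gate
        self.sse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # 1x1 convolution branch
        self.conv1x1 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # 3x3 convolution branch
        self.conv3x3 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = self.sse(x)  # (N, C, 1, 1) channel attention weights
        y = self.conv1x1(x) + self.conv3x3(x) + x * gate
        return self.act(y)

if __name__ == "__main__":
    feat = torch.randn(1, 64, 40, 40)       # dummy feature map from a C2f bottleneck
    print(ParNetAttention(64)(feat).shape)  # torch.Size([1, 64, 40, 40])

In a C2f-ParNet arrangement such a unit would sit inside or after the C2f bottlenecks so that backbone features are reweighted channel-wise before being passed to the neck, which is consistent with the feature-extraction gain the abstract reports.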

Keywords