智慧农业 (Smart Agriculture), Jul 2024

A Regional Farming Pig Counting System Based on Improved Instance Segmentation Algorithm

  • ZHANG Yanqi,
  • ZHOU Shuo,
  • ZHANG Ning,
  • CHAI Xiujuan,
  • SUN Tan

DOI: https://doi.org/10.12133/j.smartag.SA202310001
Journal volume & issue: Vol. 6, No. 4, pp. 53–63

Abstract


Objective: Pig farming facilities currently rely mainly on manual counting to keep track of pigs in stock and pigs sent to slaughter. This is not only time-consuming and labor-intensive, but also prone to counting errors caused by pig movement and potential cheating. As breeding operations expand, periodic live-asset inventories place a significant strain on human, material and financial resources. Although electronic ear tags can assist in pig counting, they are prone to breaking and falling off in group-housing environments. Most existing computer-vision counting methods require images captured from a top-down perspective, which means installing cameras above every hogpen or even deploying drones, resulting in high installation and maintenance costs. To address these challenges in the group pig counting task, a high-efficiency, low-cost pig counting method was proposed based on an improved instance segmentation algorithm and the WeChat public platform.

Methods: Firstly, a smartphone was used to collect pig images in the target area from a human-view (eye-level) perspective, and each pig's outline in the image was annotated to build a pig counting dataset; the training set contains 606 images and the test set contains 65 images. Secondly, an efficient global attention module was proposed by improving the convolutional block attention module (CBAM). The module first performed a dimension permutation on the input feature map to capture the interaction between its channel and spatial dimensions; the permuted features were then aggregated using global average pooling (GAP). A one-dimensional convolution replaced the fully connected operation in CBAM, eliminating dimensionality reduction and significantly reducing the number of model parameters. This module was integrated into the YOLOv8 single-stage instance segmentation network to build the pig counting model YOLOv8x-Ours. By adding the efficient global attention module to each C2f layer of the YOLOv8 backbone, dimensional dependencies and feature information in the image could be extracted more effectively, enabling high-accuracy pig counting. Lastly, with a focus on user experience and outreach, a pig counting WeChat mini program was developed on the WeChat public platform with the Django web framework, and the counting model was deployed behind it to count pigs from images captured by smartphones.
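To make the module design concrete, the following is a minimal PyTorch sketch of an efficient global attention block following the description above (dimension permutation, global average pooling, and a one-dimensional convolution in place of CBAM's fully connected layers). The class name, kernel size, and the exact permutation and gating details are illustrative assumptions, not the authors' published implementation.

import torch
import torch.nn as nn

class EfficientGlobalAttention(nn.Module):
    """Channel attention using a 1-D convolution instead of CBAM's MLP."""

    def __init__(self, kernel_size: int = 3):
        super().__init__()
        # A single 1-D convolution replaces CBAM's fully connected layers,
        # so no channel dimensionality reduction is needed.
        self.conv = nn.Conv1d(1, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Dimension permutation: move channels last so that channel and
        # spatial dimensions interact before aggregation.
        permuted = x.permute(0, 2, 3, 1).reshape(b, h * w, c)  # (B, HW, C)
        # Global average pooling over spatial positions.
        pooled = permuted.mean(dim=1, keepdim=True)            # (B, 1, C)
        # Local cross-channel interaction via the 1-D convolution.
        weights = self.sigmoid(self.conv(pooled))              # (B, 1, C)
        # Reshape to (B, C, 1, 1) and re-weight the input feature map.
        return x * weights.transpose(1, 2).unsqueeze(-1)

# Example: re-weight a backbone feature map, e.g. the output of a C2f block.
attn = EfficientGlobalAttention(kernel_size=3)
y = attn(torch.randn(2, 256, 40, 40))  # output has the same shape as the input

In the method described above, a block of this kind is added to each C2f layer of the YOLOv8 backbone; the kernel size of 3 here simply illustrates that the 1-D convolution mixes a small neighbourhood of channels without any dimensionality reduction.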
Results and Discussions: Compared with the existing methods Mask R-CNN, YOLACT (a real-time instance segmentation method), PolarMask, SOLO and YOLOv5x, the proposed pig counting model YOLOv8x-Ours exhibited superior accuracy and stability. Notably, YOLOv8x-Ours achieved the highest accuracy under the criteria of counting errors of fewer than 2 pigs and fewer than 3 pigs on the test set; specifically, 93.8% of the test images had a counting error of fewer than 3 pigs. Compared with the two-stage instance segmentation algorithm Mask R-CNN and with the YOLOv8x model using the CBAM attention mechanism, YOLOv8x-Ours improved performance by 7.6% and 3%, respectively. Owing to the single-stage design and anchor-free architecture of YOLOv8, the processing time for a single image was only 64 ms, about 1/8 that of Mask R-CNN. By embedding the model into the WeChat mini program platform, pig counting was carried out on images taken with a smartphone. In cases where the model detected pigs incorrectly, users could click on the erroneous location in the result image to adjust the count, further improving counting accuracy.

Conclusions: The feasibility of deep learning for the pig counting task was demonstrated. The proposed method eliminates the need to install hardware in the breeding area of the pig farm, so counting can be carried out with nothing more than a smartphone. Through the visualized segmentation results, users can promptly spot counting errors and easily correct them. This human-machine collaborative mode not only reduces the need for extensive manpower but also ensures accurate and user-friendly counting results.
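The abstract describes deploying the counting model behind a WeChat mini program through the Django web framework. The sketch below shows what a minimal server-side counting endpoint could look like under that architecture; the view name, upload field, weights path, and use of the ultralytics inference API are assumptions for illustration and are not taken from the paper.

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from ultralytics import YOLO

# Load the trained segmentation model once at startup (path is an assumption).
model = YOLO("weights/yolov8x_ours.pt")

@csrf_exempt
def count_pigs(request):
    """Hypothetical endpoint: receive an image, return the number of pigs."""
    if request.method != "POST" or "image" not in request.FILES:
        return JsonResponse({"error": "POST an 'image' file"}, status=400)

    # Save the uploaded image to a temporary file for inference.
    upload = request.FILES["image"]
    tmp_path = f"/tmp/{upload.name}"
    with open(tmp_path, "wb") as f:
        for chunk in upload.chunks():
            f.write(chunk)

    # Run instance segmentation; each predicted mask corresponds to one pig.
    result = model(tmp_path)[0]
    count = 0 if result.masks is None else len(result.masks)
    return JsonResponse({"count": count})

In the system described in the abstract, the mini program also returns a visualized segmentation image so that users can click on erroneous detections and correct the count; that interactive correction step is omitted from this sketch.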

Keywords