IET Computer Vision (Feb 2022)

DPANet: Dual Pooling‐aggregated Attention Network for fish segmentation

  • Wenbo Zhang,
  • Chaoyi Wu,
  • Zhenshan Bao

DOI
https://doi.org/10.1049/cvi2.12065
Journal volume & issue
Vol. 16, no. 1
pp. 67 – 82

Abstract

The sustainable development of marine fisheries depends on accurate measurement data on fish stocks. Semantic segmentation methods based on deep learning can automatically produce segmentation masks of fish in images, from which such measurement data can be derived. However, general semantic segmentation methods cannot accurately segment fish objects in underwater images. In this study, a Dual Pooling‐aggregated Attention Network (DPANet) is proposed to adaptively capture long‐range dependencies in an efficient, computation‐friendly manner, enhancing feature representation and improving segmentation performance. Specifically, a novel pooling‐aggregate position attention module and a pooling‐aggregate channel attention module are designed to aggregate contexts in the spatial and channel dimensions, respectively. To reduce computational costs, these two modules aggregate information with pooling operations along the channel dimension and along the spatial dimension, respectively. In each module, attention maps are generated by four different paths and aggregated into one. The authors conduct extensive experiments to validate the effectiveness of the DPANet and achieve new state‐of‐the‐art segmentation performance on the well‐known fish image dataset DeepFish as well as on the underwater image dataset SUIM, reaching Mean IoU scores of 91.08% and 85.39%, respectively, while reducing the FLOPs of the attention modules by about 93%.
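
The following is a minimal PyTorch-style sketch of the general idea conveyed by the abstract, not the authors' implementation: keys and values are pooled at several scales (four paths here), attention is computed per path against the small set of pooled tokens, and the resulting outputs are aggregated into one. The module name, the pooling scales (1, 3, 6, 8), and aggregation by averaging are assumptions made for illustration.

```python
# Hypothetical sketch of a pooling-aggregated position attention module.
# Pooling keys/values before attention is what cuts the FLOPs; the exact
# scales and the averaging of the four paths are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PoolAggregatedPositionAttention(nn.Module):
    def __init__(self, in_channels, reduced_channels=None, pool_sizes=(1, 3, 6, 8)):
        super().__init__()
        reduced_channels = reduced_channels or in_channels // 8
        self.pool_sizes = pool_sizes
        self.query = nn.Conv2d(in_channels, reduced_channels, kernel_size=1)
        self.key = nn.Conv2d(in_channels, reduced_channels, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k_full = self.key(x)                          # (B, C', H, W)
        v_full = self.value(x)                        # (B, C,  H, W)

        aggregated = 0
        for s in self.pool_sizes:
            # Pool keys/values to an s x s grid: attention is computed against
            # only s*s tokens instead of H*W, greatly reducing computation.
            k = F.adaptive_avg_pool2d(k_full, s).flatten(2)                   # (B, C', s*s)
            v = F.adaptive_avg_pool2d(v_full, s).flatten(2).transpose(1, 2)   # (B, s*s, C)
            attn = torch.softmax(q @ k, dim=-1)                               # (B, HW, s*s)
            aggregated = aggregated + attn @ v                                # (B, HW, C)

        out = aggregated.transpose(1, 2).reshape(b, c, h, w) / len(self.pool_sizes)
        return x + self.gamma * out


# Example usage on a dummy feature map:
# y = PoolAggregatedPositionAttention(256)(torch.randn(2, 256, 64, 64))
```

A channel-attention counterpart would, analogously, pool along the spatial dimension so that channel-to-channel affinities are computed over a reduced descriptor per channel.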

Keywords