Scientific Reports (Sep 2024)

UICE-MIRNet guided image enhancement for underwater object detection

  • Pratima Sarkar,
  • Sourav De,
  • Sandeep Gurung,
  • Prasenjit Dey

DOI
https://doi.org/10.1038/s41598-024-73243-9
Journal volume & issue
Vol. 14, no. 1
pp. 1–18

Abstract


Underwater object detection is a crucial aspect of monitoring aquaculture resources to preserve the marine ecosystem. In most cases, low-light and scattered lighting conditions create challenges for computer vision-based underwater object detection. To address these issues, low-colorfulness and low-light image enhancement techniques are explored. This work proposes an underwater image enhancement technique called Underwater Image Colorfulness Enhancement MIRNet (UICE-MIRNet) to increase the visibility of small, multiple, dense objects, followed by underwater object detection using YOLOv4. UICE-MIRNet is a specialized version of the classical MIRNet that handles random increments in brightness features to address the visibility problem. The proposed UICE-MIRNet restricts brightness and also improves the colorfulness of underwater images. UICE-MIRNet consists of an Underwater Image-Colorfulness Enhancement Block (UI-CEB). This block extracts low-colorfulness areas from underwater images and performs color correction without affecting contextual information. The primary characteristics of UICE-MIRNet are the extraction of multiple features using convolutional streams, feature fusion to facilitate the flow of information, preservation of contextual information by discarding irrelevant features, and increased colorfulness through proper feature selection. The enhanced images are then used to train the YOLOv4 object detection model. The performance of the proposed UICE-MIRNet method is quantitatively evaluated using standard metrics such as UIQM, UCIQE, entropy, and PSNR, and the proposed work is compared with many existing image enhancement and restoration techniques. Object detection performance is assessed using precision, recall, and mAP. Extensive experiments are conducted on two standard datasets, Brackish and Trash-ICRA19, to demonstrate the performance of the proposed work compared to existing methods. The results show that the proposed model outperforms many state-of-the-art techniques.
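As a minimal sketch of the evaluation side described above (not the authors' code), the snippet below computes two of the named image-quality metrics, PSNR and Shannon entropy, plus the Hasler-Süsstrunk colorfulness measure as one common way to quantify colorfulness; the abstract does not specify which colorfulness formula the paper uses, so that part is an assumption, and UIQM/UCIQE are omitted because their full formulations are longer.

```python
import numpy as np


def psnr(reference: np.ndarray, enhanced: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference and an enhanced image."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)


def shannon_entropy(gray: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (in bits) of a grayscale image's intensity histogram."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))


def colorfulness(rgb: np.ndarray) -> float:
    """Hasler-Süsstrunk colorfulness of an RGB image (higher = more colorful).

    Assumed metric for illustration only; the paper may measure colorfulness differently.
    """
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    rg = r - g                      # red-green opponent channel
    yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
    std = np.sqrt(np.std(rg) ** 2 + np.std(yb) ** 2)
    mean = np.sqrt(np.mean(rg) ** 2 + np.mean(yb) ** 2)
    return float(std + 0.3 * mean)
```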
