IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (Jan 2024)

AF-Net: An Active Fire Detection Model Using Improved Object-Contextual Representations on Unbalanced UAV Datasets

  • Xikun Hu,
  • Wenlin Liu,
  • Hao Wen,
  • Ka-Veng Yuen,
  • Tian Jin,
  • Alberto Costa Nogueira Junior,
  • Ping Zhong

DOI
https://doi.org/10.1109/JSTARS.2024.3406767
Journal volume & issue
Vol. 17
pp. 13558–13569

Abstract

Active fire (AF) detection is essential for early warning of wildfires, helping to suppress fires and mitigate damage. This study presents an AF neural network (AF-Net) model based on object-contextual representations (OCR) for AF segmentation from very high-resolution (VHR) unmanned aerial vehicle (UAV) remote sensing images. To efficiently detect heat anomalies in forests from large UAV scenes, we must handle the class imbalance between the small number of AF pixels and the large, complex background. Class imbalance hampers model optimization and can trap the training process in a local minimum. Our work addresses this issue by improving the object-contextual feature representations associated with fire in three ways. First, we employ a grid-based sampling strategy that constrains sampling ranges and reduces background samples; it raises the proportion of foreground pixels from 5.6% to 7.9% and ensures at least one AF pixel in each sample. Second, we simplify the OCR module with a self-attention unit to strengthen the small-object representations related to AF; the OCR module receives multiscale pixel representations from the HRNet-W48 backbone as input. Lastly, the weighted binary cross-entropy loss and the Lovász hinge loss are combined to improve detection accuracy by optimizing the foreground IoU. We evaluate the proposed AF-Net on an aerial AF benchmark (the FLAME dataset). The proposed framework improves the mIoU score from 78.17% (baseline U-Net) to 91.14%.
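As a rough illustration of the combined objective described in the abstract, the PyTorch sketch below mixes a weighted binary cross-entropy term with the binary Lovász hinge loss (Berman et al., 2018). The function names, the positive-class weight pos_weight, and the mixing coefficient alpha are illustrative assumptions; the paper's exact weighting scheme is not given in the abstract.

import torch
import torch.nn.functional as F


def lovasz_grad(gt_sorted):
    # Gradient of the Lovasz extension w.r.t. sorted errors (Berman et al., 2018).
    p = len(gt_sorted)
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.cumsum(0)
    union = gts + (1.0 - gt_sorted).cumsum(0)
    jaccard = 1.0 - intersection / union
    if p > 1:
        jaccard[1:p] = jaccard[1:p] - jaccard[0:-1]
    return jaccard


def lovasz_hinge(logits, labels):
    # Binary Lovasz hinge loss over flattened logits and {0, 1} labels.
    signs = 2.0 * labels - 1.0
    errors = 1.0 - logits * signs
    errors_sorted, perm = torch.sort(errors, descending=True)
    gt_sorted = labels[perm]
    return torch.dot(F.relu(errors_sorted), lovasz_grad(gt_sorted))


def combined_fire_loss(logits, labels, pos_weight=10.0, alpha=0.5):
    # Weighted BCE up-weights the rare fire pixels; the Lovasz hinge term
    # serves as a surrogate for the foreground IoU. pos_weight and alpha
    # are illustrative values, not taken from the paper.
    logits_flat = logits.reshape(-1)
    labels_flat = labels.reshape(-1).float()
    bce = F.binary_cross_entropy_with_logits(
        logits_flat, labels_flat,
        pos_weight=torch.tensor(pos_weight, device=logits.device))
    return alpha * bce + (1.0 - alpha) * lovasz_hinge(logits_flat, labels_flat)


# Example usage with single-channel fire logits and binary ground-truth masks:
# loss = combined_fire_loss(model(images), fire_masks)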

Keywords