Science of Remote Sensing (Dec 2024)

Using computer vision to classify, locate and segment fire behavior in UAS-captured images

  • Brett L. Lawrence,
  • Emerson de Lemmus

Journal volume & issue
Vol. 10
p. 100167

Abstract

The widely adaptable capabilities of artificial intelligence, in particular deep learning and computer vision, have led to significant research output on flame and smoke detection. The composition of flame and smoke, also described as fire behavior, can differ considerably depending on factors such as weather, fuels, and the landscape on which the fire is observed. The ability to detect definable classes of fire behavior using computer vision has not been explored and could be valuable, given that fire behavior often dictates how firefighters respond to fire situations. To test whether types of fire behavior could be reliably classified, we collected and labeled a unique unmanned aerial system (UAS) image dataset of fire behavior classes for training and validating You Only Look Once (YOLO) detection models. Our 960 labeled images were sourced from over 21 h of UAS video collected during prescribed fire operations covering a large region of Texas and Louisiana, United States. National Wildfire Coordinating Group (NWCG) fire behavior observations and descriptions served as the reference for assigning fire behavior classes during labeling. YOLOv8 models were trained on NWCG Rank 1–3 fire behavior descriptions in grassland, shrubland, forested, and combined fire regimes within our study area. Models were first trained and validated to classify isolated image objects of fire behavior, and then separately trained to locate and segment fire behavior classes in UAS images. Models trained to classify isolated image objects of fire behavior consistently achieved a mAP of 0.808 or higher, with combined fire regimes producing the best results (mAP = 0.897). Most segmentation models performed relatively poorly, except the forest regime model, which achieved a box (localization) mAP of 0.59 and a mask (segmentation) mAP of 0.611.
Our results indicate that classifying fire behavior with computer vision is possible across different fire regimes and fuel models, whereas locating and segmenting fire behavior types against background information is comparatively difficult. It may, however, become manageable with enough data and with models developed for a specific fire regime. With an increasing number of destructive wildfires and new challenges confronting fire managers, identifying how new technologies can rapidly assess wildfire situations can improve wildfire responder awareness. We conclude that computer vision can support levels of abstraction deeper than simple smoke or flame detection, and could make even more detailed aerial fire monitoring possible using a UAS.

Keywords