IEEE Access (Jan 2024)

FireDetXplainer: Decoding Wildfire Detection With Transparency and Explainable AI Insights

  • Syeda Fiza Rubab
  • Arslan Abdul Ghaffar
  • Gyu Sang Choi

DOI: https://doi.org/10.1109/ACCESS.2024.3383653
Journal volume & issue: Vol. 12, pp. 52378–52389

Abstract

Recent analyses by leading national wildfire and emergency monitoring agencies have highlighted an alarming trend: wildfire devastation has escalated to nearly three times its level of a decade ago. To address this challenge, we propose FireDetXplainer (FDX), a robust deep-learning model that provides the interpretability often lacking in current solutions. FDX combines transfer learning and fine-tuning with the Learning without Forgetting (LwF) framework, building on the pre-trained MobileNetV3 model, which is renowned for its efficiency in image classification tasks. Through strategic adaptation and augmentation, together with additional convolutional blocks and advanced image pre-processing, the model achieves a classification accuracy of 99.91%. Trained on diverse datasets from Kaggle and Mendeley, FireDetXplainer incorporates Explainable AI (XAI) tools, namely Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-agnostic Explanations (LIME), for comprehensive interpretation of its results. Extensive experimental results demonstrate that FireDetXplainer outperforms existing state-of-the-art models, making it a highly effective solution for interpretable image classification in wildfire management.
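
To make the described pipeline concrete, the sketch below illustrates the general pattern the abstract names: an ImageNet-pretrained MobileNetV3 backbone adapted via transfer learning to a fire / no-fire task, with Grad-CAM used to highlight the image regions driving a prediction. This is a minimal illustration in PyTorch, not the authors' released code; the two-class head, the choice of target layer, and class_idx=1 as the "fire" class are assumptions, and the paper's LwF training procedure and pre-processing are omitted.

import torch
import torch.nn as nn
from torchvision import models

# Transfer learning: adapt an ImageNet-pretrained MobileNetV3 backbone.
weights = models.MobileNet_V3_Large_Weights.DEFAULT
model = models.mobilenet_v3_large(weights=weights)
# Replace the final classifier layer for binary fire / no-fire output
# (two classes is an assumption; the paper's exact head is not shown here).
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 2)

# Minimal Grad-CAM: weight the target layer's feature maps by
# gradients of the chosen class score, pooled over space.
class GradCAM:
    def __init__(self, model, target_layer):
        self.model = model.eval()
        self.activations, self.gradients = None, None
        target_layer.register_forward_hook(self._save_act)
        target_layer.register_full_backward_hook(self._save_grad)

    def _save_act(self, module, inp, out):
        self.activations = out.detach()

    def _save_grad(self, module, grad_in, grad_out):
        self.gradients = grad_out[0].detach()

    def __call__(self, x, class_idx):
        logits = self.model(x)
        self.model.zero_grad()
        logits[0, class_idx].backward()
        w = self.gradients.mean(dim=(2, 3), keepdim=True)       # pool gradients per channel
        cam = torch.relu((w * self.activations).sum(dim=1))[0]  # weighted sum of feature maps
        return cam / (cam.max() + 1e-8)                         # normalize to [0, 1]

cam = GradCAM(model, target_layer=model.features[-1])  # last conv block of the backbone
x = torch.randn(1, 3, 224, 224)                        # placeholder for a pre-processed image
heatmap = cam(x, class_idx=1)                          # coarse saliency map for the "fire" class

The resulting low-resolution heatmap is typically upsampled to the input size and overlaid on the image, which is how Grad-CAM visualizations such as those reported in the paper are usually produced.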

Keywords