IEEE Access (Jan 2020)

Efficient Video Fire Detection Exploiting Motion-Flicker-Based Dynamic Features and Deep Static Features

  • Yakun Xie,
  • Jun Zhu,
  • Yungang Cao,
  • Yunhao Zhang,
  • Dejun Feng,
  • Yuchun Zhang,
  • Min Chen

DOI
https://doi.org/10.1109/ACCESS.2020.2991338
Journal volume & issue
Vol. 8, pp. 81904–81917

Abstract

Since fire is one of the most serious types of accidents that can occur, there is a continual need to improve fire detection capabilities. Convolutional neural networks (CNNs) have been used for a variety of high-performance computer vision tasks, and using CNNs to extract deep static features of fire has greatly improved the accuracy of fire detection. However, the deployment of CNNs in the real world is limited by their high computational cost. In addition, fire detection methods that classify individual images with CNNs cannot account for the dynamic features of fire. Therefore, in this paper, a method that exploits both motion-flicker-based dynamic features and deep static features is proposed for video fire detection. First, dynamic features are extracted by analyzing the differences in motion and flicker behavior between fire and other objects in videos. Second, an adaptive lightweight convolutional neural network (AL-CNN) is proposed to extract the deep static features of fire. Finally, the dynamic and static features of fire are combined to establish a video fire detection method that improves both accuracy and run time. To demonstrate the validity of our method, its accuracy and run time are evaluated on three test datasets, and the results reveal that our method outperforms state-of-the-art methods. Moreover, our method is shown to be feasible in complex video scenarios and on devices with resource constraints.
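The abstract's pipeline (dynamic motion-flicker candidates fused with a CNN static score) can be illustrated with a minimal sketch. This is not the paper's AL-CNN method: the thresholds, the frame-differencing motion cue, the variance-based flicker cue, and the stub `static_prob` score are all hypothetical stand-ins chosen here for illustration.

```python
import numpy as np

def dynamic_candidate_mask(frames, motion_thresh=15.0, flicker_thresh=100.0):
    """Hypothetical dynamic-feature stage: flag pixels that both move
    (large mean absolute frame-to-frame difference) and flicker
    (high temporal intensity variance). Stand-in for the paper's
    motion-flicker analysis, not its actual algorithm."""
    frames = frames.astype(np.float32)          # (T, H, W) grayscale stack
    motion = np.abs(np.diff(frames, axis=0)).mean(axis=0)   # per-pixel motion
    flicker = frames.var(axis=0)                            # per-pixel flicker
    return (motion > motion_thresh) & (flicker > flicker_thresh)

def detect_fire(frames, static_prob, candidate_ratio=0.01, prob_thresh=0.5):
    """Fuse the dynamic candidate mask with a static classifier score.
    `static_prob` stands in for the AL-CNN's fire probability; here it is
    just a number supplied by the caller."""
    mask = dynamic_candidate_mask(frames)
    dynamic_hit = mask.mean() > candidate_ratio  # enough flickering pixels?
    return dynamic_hit and static_prob > prob_thresh
```

In this sketch a frame stack only triggers a detection when both cues agree, mirroring the abstract's idea that dynamic features prune candidates before (or alongside) the static CNN decision, which is what keeps the overall cost low on constrained devices.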

Keywords