Mathematical Biosciences and Engineering (Nov 2020)

DeepFireNet: A real-time video fire detection method based on multi-feature fusion

  • Bin Zhang,
  • Linkun Sun,
  • Yingjie Song,
  • Weiping Shao,
  • Yan Guo,
  • Fang Yuan

DOI
https://doi.org/10.3934/mbe.2020397
Journal volume & issue
Vol. 17, no. 6
pp. 7804 – 7818

Abstract

Read online

This paper proposes DeepFireNet, a real-time fire detection framework that combines hand-crafted fire features with a convolutional neural network and detects fires in real-time video collected by monitoring equipment. DeepFireNet takes the surveillance device's video stream as input. First, based on the static and dynamic characteristics of fire, it filters out the large number of non-fire frames in the stream; for the remaining frames, it extracts the suspected fire regions and eliminates interference sources such as lamps and candles, reducing the effect of complex environments on fire detection. The algorithm then encodes each extracted region and feeds it into the DeepFireNet convolutional network, which extracts deep image features and finally judges whether a fire is present. The network replaces each 5×5 convolution kernel in the inception layer with two stacked 3×3 convolution kernels and uses only three improved inception layers as its core architecture, which effectively reduces the number of network parameters and significantly cuts the amount of computation. Experimental results show that the method can be applied to many different indoor and outdoor scenes and that it meets the accuracy and real-time requirements of real-time video detection. The method has good practicability.
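The parameter saving claimed for factoring a 5×5 kernel into two stacked 3×3 kernels (a standard trick from Inception-style networks) can be checked with a quick back-of-the-envelope calculation. The sketch below is illustrative only; the channel width is a hypothetical value, not a figure taken from the paper:

```python
def conv_params(kernel, c_in, c_out):
    """Weight count of one 2-D convolution layer (bias terms omitted)."""
    return kernel * kernel * c_in * c_out

# Hypothetical channel width chosen for illustration; the abstract
# does not specify the layer sizes used in DeepFireNet.
c = 64

single_5x5 = conv_params(5, c, c)        # one 5x5 layer
stacked_3x3 = 2 * conv_params(3, c, c)   # two stacked 3x3 layers,
                                         # same 5x5 receptive field

print(single_5x5, stacked_3x3)           # 102400 73728
print(stacked_3x3 / single_5x5)          # 0.72 -> 28% fewer weights
```

The ratio 18/25 = 0.72 is independent of the channel counts whenever input and output widths match, so the two stacked 3×3 layers always need 28% fewer weights than the single 5×5 layer they replace, while also adding an extra nonlinearity between them.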

Keywords