The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Jun 2021)

MBS-NET: A MOVING-CAMERA BACKGROUND SUBTRACTION NETWORK FOR AUTONOMOUS DRIVING

  • J. Wei,
  • J. Jiang,
  • A. Yilmaz

DOI
https://doi.org/10.5194/isprs-archives-XLIII-B2-2021-69-2021
Journal volume & issue
Vol. XLIII-B2-2021
pp. 69 – 76

Abstract

Background subtraction aims at detecting the salient background, which in turn provides the regions of moving objects, referred to as the foreground. Background subtraction inherently uses temporal relations by including the time dimension in its formulation. Conventional techniques for background subtraction require stationary cameras for learning the background. Stationary cameras provide semi-constant background images that make learning the salient background easier. Stationary cameras, however, are not applicable to moving-camera scenarios, such as a vehicle-embedded camera for autonomous driving. For moving cameras, due to the complexity of modelling a changing background, recent approaches focus on directly detecting the foreground objects in each frame independently. This treatment, however, requires learning all possible objects that can appear in the field of view. In this paper, we achieve background subtraction for moving cameras using a specialized deep learning approach, the Moving-camera Background Subtraction Network (MBS-Net). Our approach robustly detects the changing background in various scenarios and does not require training on foreground objects. The developed approach uses temporal cues from past frames by applying Conditional Random Fields as part of the neural network. Our proposed method performs well on the ApolloScape dataset (Huang et al., 2018), whose videos have a resolution of 3384 × 2710. To the best of our knowledge, this paper is the first to propose background subtraction for moving cameras using deep learning.
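
The use of temporal cues from past frames via a CRF-style term can be illustrated with a minimal sketch. The code below is not the authors' implementation: the function name, the two-label mean-field update, and the purely temporal pairwise potential are assumptions made for illustration only; MBS-Net's actual network architecture and CRF formulation are described in the paper.

```python
import numpy as np

def temporal_crf_refine(bg_prob, prev_bg_prob, w=1.5, eps=1e-6):
    """Refine a per-frame background probability map with a temporal CRF term.

    bg_prob      : (H, W) background probability predicted for the current frame.
    prev_bg_prob : (H, W) refined background probability of the previous frame,
                   or None for the first frame of the sequence.
    w            : weight of the temporal pairwise (label-consistency) potential.

    A two-label mean-field update: the unary energy comes from the current
    prediction, the pairwise energy penalizes disagreement with the previous
    frame's soft labels.
    """
    if prev_bg_prob is None:
        return bg_prob
    p = np.clip(bg_prob, eps, 1.0 - eps)
    # Unary energies for the background / foreground labels.
    u_bg = -np.log(p)
    u_fg = -np.log(1.0 - p)
    # Expected temporal pairwise energies given the previous soft labels:
    # labeling a pixel 'background' is costly if the previous frame said 'foreground',
    # and vice versa.
    pair_bg = w * (1.0 - prev_bg_prob)
    pair_fg = w * prev_bg_prob
    # Normalize over the two labels (mean-field update).
    score_bg = np.exp(-(u_bg + pair_bg))
    score_fg = np.exp(-(u_fg + pair_fg))
    return score_bg / (score_bg + score_fg)
```

Applied frame by frame, each refined map would be fed back as prev_bg_prob, so cues from past frames accumulate over the sequence; the spatial pairwise terms that a full CRF would include are omitted from this sketch.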