Entropy (Jul 2022)

FASSVid: Fast and Accurate Semantic Segmentation for Video Sequences

  • Jose Portillo-Portillo,
  • Gabriel Sanchez-Perez,
  • Linda K. Toscano-Medina,
  • Aldo Hernandez-Suarez,
  • Jesus Olivares-Mercado,
  • Hector Perez-Meana,
  • Pablo Velarde-Alvarado,
  • Ana Lucila Sandoval Orozco,
  • Luis Javier García Villalba

DOI: https://doi.org/10.3390/e24070942
Journal volume & issue: Vol. 24, No. 7, p. 942

Abstract

Most methods for real-time semantic segmentation do not take temporal information into account when working with video sequences. This is counter-intuitive in real-world scenarios, where the main application of such methods is precisely to process frame sequences as quickly and accurately as possible. In this paper, we address this problem by exploiting the temporal information provided by previous frames of the video stream. Our method leverages a previous input frame, as well as the previous output of the network, to enhance the prediction accuracy of the current input frame. We develop a module that obtains feature maps rich in change information. Additionally, we incorporate the previous output of the network into all the decoder stages as a way of increasing the attention given to relevant features. Finally, to properly train and evaluate our method, we introduce CityscapesVid, a dataset specifically designed to benchmark semantic video segmentation networks. Our proposed network, entitled FASSVid, improves mIoU accuracy over a standard non-sequential baseline model. Moreover, FASSVid obtains state-of-the-art inference speed and competitive mIoU results compared with other state-of-the-art lightweight networks, with a significantly lower number of computations. Specifically, we obtain 71% mIoU on our CityscapesVid dataset, running at 114.9 FPS on a single NVIDIA GTX 1080Ti and at 31 FPS on the NVIDIA Jetson Nano embedded board, with images of size 1024×2048 and 512×1024, respectively.
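The abstract describes two ideas: deriving change-rich features from the current and previous frames, and using the previous segmentation output to reweight decoder features. The sketch below is a minimal, hypothetical illustration of those two ideas in PyTorch; the module names, channel sizes, and wiring are assumptions for illustration and do not reproduce the authors' actual FASSVid architecture.

    # Minimal sketch (assumed design, not the authors' implementation).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class ChangeFeatureModule(nn.Module):
        """Derives change-rich feature maps from the current and previous frames."""
        def __init__(self, in_ch=3, out_ch=32):
            super().__init__()
            self.encode = nn.Sequential(
                nn.Conv2d(2 * in_ch, out_ch, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

        def forward(self, frame_t, frame_t_prev):
            # Concatenate both frames so the convolution can capture inter-frame change.
            return self.encode(torch.cat([frame_t, frame_t_prev], dim=1))


    class PrevOutputAttention(nn.Module):
        """Reweights decoder features using the previous segmentation output."""
        def __init__(self, num_classes, feat_ch):
            super().__init__()
            self.project = nn.Conv2d(num_classes, feat_ch, kernel_size=1)

        def forward(self, decoder_feats, prev_logits):
            # Resize the previous prediction to the decoder stage's resolution.
            prev_logits = F.interpolate(prev_logits, size=decoder_feats.shape[2:],
                                        mode='bilinear', align_corners=False)
            # Turn it into a per-channel attention map and emphasize consistent features.
            attn = torch.sigmoid(self.project(prev_logits))
            return decoder_feats * attn

In such a setup, the change features would be fed to the encoder alongside the current frame, and a PrevOutputAttention block would be applied at each decoder stage; the exact placement in FASSVid is described in the paper itself.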

Keywords