IET Computer Vision (Dec 2016)

Surveillance video synopsis generation method via keeping important relationship among objects

  • Yumin Tian,
  • Haihong Zheng,
  • Qichao Chen,
  • Dan Wang,
  • Risan Lin

DOI
https://doi.org/10.1049/iet-cvi.2016.0128
Journal volume & issue
Vol. 10, no. 8
pp. 868 – 872

Abstract

To reduce the human effort required to browse long surveillance videos, synopsis videos have been proposed. Traditional synopsis generation methods condense most of the activities in a video by showing several actions simultaneously, even when they originally occurred at different times. This inevitably discards the temporal relationships among objects: for example, two persons who walk shoulder to shoulder may be detected and tracked separately, so in the synopsis they never ‘meet’. In this study, a trajectory mapping model is defined whose energy function includes not only the cost incurred in the synopsis video but also that of the original video. In this way, the model keeps the relationships between objects in the synopsis consistent with those in the original video. Finally, the video synopsis is generated by an energy minimisation method. Experiments show that the proposed method reduces the spatio-temporal redundancy of the input video as much as possible. Moreover, it preserves the important relationships between objects and maintains the temporal consistency of important activities.
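The abstract describes remapping object trajectories ("tubes") to new start times by minimising an energy that accounts for both the synopsis and the original video. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: the tube representation, the cost weights, the relationship-consistency term (penalising changes to pairwise temporal overlap observed in the original video), and the simulated-annealing optimiser are all assumptions made for the example.

```python
import math
import random


def overlap(a_start, a_len, b_start, b_len):
    """Number of frames during which two tubes coexist."""
    return max(0, min(a_start + a_len, b_start + b_len) - max(a_start, b_start))


def energy(tubes, shifts, synopsis_len, w_act=1.0, w_rel=2.0):
    """tubes: list of dicts with 'start' and 'length'; shifts: new start times.

    The energy mixes a synopsis-side cost (frames pushed outside the synopsis)
    with an original-video term that keeps pairwise co-occurrence consistent.
    A collision/occlusion cost between unrelated tubes is omitted for brevity.
    """
    e = 0.0
    for t, s in zip(tubes, shifts):
        # Activity cost: penalise activity cut off by the synopsis length.
        e += w_act * max(0, s + t['length'] - synopsis_len)
    for i in range(len(tubes)):
        for j in range(i + 1, len(tubes)):
            orig = overlap(tubes[i]['start'], tubes[i]['length'],
                           tubes[j]['start'], tubes[j]['length'])
            syn = overlap(shifts[i], tubes[i]['length'],
                          shifts[j], tubes[j]['length'])
            # Relationship-consistency cost: tubes that co-occurred in the
            # original video should still co-occur in the synopsis.
            e += w_rel * abs(orig - syn)
    return e


def optimise(tubes, synopsis_len, iters=5000, temp=10.0, cooling=0.999):
    """Simulated annealing over integer start-time shifts (illustrative only)."""
    shifts = [random.randrange(synopsis_len) for _ in tubes]
    best, best_e = list(shifts), energy(tubes, shifts, synopsis_len)
    cur_e = best_e
    for _ in range(iters):
        cand = list(shifts)
        cand[random.randrange(len(tubes))] = random.randrange(synopsis_len)
        cand_e = energy(tubes, cand, synopsis_len)
        if cand_e < cur_e or random.random() < math.exp((cur_e - cand_e) / temp):
            shifts, cur_e = cand, cand_e
            if cur_e < best_e:
                best, best_e = list(shifts), cur_e
        temp *= cooling
    return best, best_e


if __name__ == "__main__":
    # Two tubes that overlapped in the original video, plus one that did not.
    tubes = [{'start': 0, 'length': 50},
             {'start': 10, 'length': 50},
             {'start': 500, 'length': 40}]
    shifts, e = optimise(tubes, synopsis_len=120)
    print("new start times:", shifts, "energy:", round(e, 2))
```

Under this toy formulation, the first two tubes tend to be placed so that they still overlap in the synopsis, which mirrors the paper's stated goal of keeping important inter-object relationships from the original video.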

Keywords