IET Image Processing (Feb 2021)
An unsupervised approach for traffic motion patterns extraction
Abstract
Automatically analysing crowded traffic scenes, understanding their typical activities, and identifying vehicle behaviour are fundamental and challenging tasks in traffic video surveillance. Several recent studies have used machine learning approaches to extract meaningful patterns occurring in a traffic scene, for example at an intersection. In this work, we convert visual patterns and features into visual words using dense and sparse optical flow and learn traffic motion patterns with the group sparse topical coding (GSTC) algorithm. In the first step of the proposed algorithm, the input traffic video is divided into non‐overlapping clips. Motion vectors are then extracted using dual TV‐L1 as a dense optical flow method and Lucas–Kanade as a sparse optical flow method, and are converted into flow words. Traffic motion patterns are then learned with GSTC, a non‐probabilistic topic model (TM). These patterns represent priors on observable motion, which can be used to describe a scene and answer behavioural questions such as which motion patterns occur in a traffic scene and what is currently happening. Experimental results obtained on a real dataset, QMUL, show that the combination of GSTC + dual TV‐L1 extracts more traffic motion patterns than GSTC + Lucas–Kanade and previous studies.
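The flow-word construction summarised above can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' implementation: it assumes OpenCV with the contrib optflow module (for the dual TV‐L1 estimator) and NumPy, and the grid size, motion threshold, and number of direction bins are hypothetical choices used only for demonstration. It computes a dense dual TV‐L1 flow field and sparse Lucas–Kanade tracks between two greyscale frames, and quantises each significant motion vector into a flow word defined by its spatial cell and direction bin; the resulting word counts per clip would form the input to a topic model such as GSTC.

# Minimal sketch of flow-word extraction (assumed parameters; not the paper's exact settings).
# Requires opencv-contrib-python for cv2.optflow.createOptFlow_DualTVL1().
import cv2
import numpy as np

def flow_words_dense(prev_gray, next_gray, grid=16, n_dir_bins=8, min_mag=1.0):
    """Dual TV-L1 dense optical flow, quantised into flow words (cell index + direction bin)."""
    tvl1 = cv2.optflow.createOptFlow_DualTVL1()    # dense dual TV-L1 estimator
    flow = tvl1.calc(prev_gray, next_gray, None)   # H x W x 2 flow field
    h, w = prev_gray.shape
    words = []
    for y in range(0, h, grid):
        for x in range(0, w, grid):
            dx, dy = flow[y:y + grid, x:x + grid].reshape(-1, 2).mean(axis=0)
            if np.hypot(dx, dy) < min_mag:         # ignore near-static cells
                continue
            direction = int(((np.arctan2(dy, dx) + np.pi) / (2 * np.pi)) * n_dir_bins) % n_dir_bins
            cell_id = (y // grid) * (w // grid) + (x // grid)
            words.append(cell_id * n_dir_bins + direction)
    return words

def flow_words_sparse(prev_gray, next_gray, grid=16, n_dir_bins=8, min_mag=1.0):
    """Same quantisation, but with Lucas-Kanade sparse flow on Shi-Tomasi corners."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return []
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)
    w = prev_gray.shape[1]
    words = []
    for (x0, y0), (x1, y1), ok in zip(p0.reshape(-1, 2), p1.reshape(-1, 2), status.reshape(-1)):
        if not ok:                                 # track lost between frames
            continue
        dx, dy = x1 - x0, y1 - y0
        if np.hypot(dx, dy) < min_mag:
            continue
        direction = int(((np.arctan2(dy, dx) + np.pi) / (2 * np.pi)) * n_dir_bins) % n_dir_bins
        cell_id = (int(y0) // grid) * (w // grid) + (int(x0) // grid)
        words.append(cell_id * n_dir_bins + direction)
    return words

In this sketch a "document" for the topic model would be the bag of flow words accumulated over one non-overlapping clip; only the quantisation step is shown, not the GSTC learning itself.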
Keywords