IEEE Access (Jan 2023)

Joint Appearance and Motion Model With Temporal Transformer for Multiple Object Tracking

  • Hyunseop Kim,
  • Hyo-Jun Lee,
  • Hanul Kim,
  • Seong-Gyun Jeong,
  • Yeong Jun Koh

DOI
https://doi.org/10.1109/ACCESS.2023.3333366
Journal volume & issue
Vol. 11
pp. 133792–133803

Abstract

Multi-object tracking (MOT) in the real world poses several challenges, such as similar appearances, occlusions, and extreme articulated motion. In this paper, we propose a novel joint appearance and motion model that is robust to diverse motions and to objects with similar, uniform appearances. The proposed MOT method comprises a temporal transformer, a motion estimation module, and a ReID embedding module. The temporal transformer is designed to convey object-aware features to the ReID embedding and motion estimation modules. The ReID embedding module extracts ReID features of the detected objects, while the motion estimation module predicts the expected locations of previously tracked objects in the current frame. We also present a motion-guided association that effectively fuses the outputs of the appearance and motion modules. Experimental results demonstrate that the proposed method outperforms state-of-the-art trackers on the TAO and DanceTrack datasets, which contain objects with diverse motions and similar appearances. Furthermore, it provides stable performance on MOT17 and MOT20, which contain objects with simple, regular motion patterns.
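To make the abstract's association step concrete, the sketch below shows one generic way an appearance cue (cosine similarity between ReID embeddings) and a motion cue (IoU between motion-predicted boxes and detections) can be fused into a single matching score. This is a hypothetical illustration of appearance–motion fusion in general, not the paper's actual motion-guided association; the function names, the weighting parameter `alpha`, and the greedy matching strategy are all assumptions.

```python
import math

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def cosine(u, v):
    """Cosine similarity of two ReID embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

def associate(tracks, detections, alpha=0.5, thresh=0.3):
    """Greedily match tracks to detections on a fused score.

    tracks:     list of (reid_embedding, motion_predicted_box)
    detections: list of (reid_embedding, detected_box)
    alpha:      weight of the appearance cue vs. the motion cue (assumed)
    Returns a list of (track_index, detection_index) pairs.
    """
    scores = []
    for ti, (t_emb, t_box) in enumerate(tracks):
        for di, (d_emb, d_box) in enumerate(detections):
            # Fused score: weighted sum of appearance and motion cues.
            s = alpha * cosine(t_emb, d_emb) + (1 - alpha) * iou(t_box, d_box)
            scores.append((s, ti, di))
    matches, used_t, used_d = [], set(), set()
    for s, ti, di in sorted(scores, reverse=True):
        if s < thresh or ti in used_t or di in used_d:
            continue
        matches.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    return matches
```

In practice, MOT pipelines usually solve this assignment with the Hungarian algorithm on a cost matrix rather than greedily; the greedy loop here is only to keep the sketch self-contained.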

Keywords