Opto-Electronic Advances (Sep 2020)

Visual tracking based on transfer learning of deep salience information

  • Zuo Haorui,
  • Xu Zhiyong,
  • Zhang Jianlin,
  • Jia Ge

DOI
https://doi.org/10.29026/oea.2020.190018
Journal volume & issue
Vol. 3, no. 9
pp. 190018-1 – 190018-11

Abstract

In this paper, we propose a new visual tracking method based on salience information and deep learning. Salience detection is used to extract image features that carry salient information. Rich representations of image features are obtained through the successive layers of a convolutional neural network (CNN). The attention-based salience mechanism of biological vision resembles the feature hierarchy of a CNN, which motivates us to improve the representational ability of the CNN with salience detection. We adopt fully convolutional networks (FCNs) to perform salience detection and use part of the network structure for salience extraction, which improves the classification ability of the model. The proposed network performs well in tracking by exploiting salient information. Compared with other strong algorithms, our tracker follows the target more accurately on open tracking datasets. We achieve an accuracy of 0.5592 on the Visual Object Tracking 2015 (VOT15) dataset. On the Unmanned Aerial Vehicle 123 (UAV123) dataset, the precision and success rate of our tracker are 0.710 and 0.429, respectively.
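The core idea sketched in the abstract, using a salience map to emphasize salient regions of CNN feature maps, can be illustrated with a minimal example. This is a hypothetical sketch for intuition only, not the authors' implementation; the function name and shapes are assumptions.

```python
import numpy as np

def modulate_features(feature_maps, salience_map):
    """Reweight CNN feature maps with a spatial salience map.

    feature_maps: (C, H, W) array of convolutional activations.
    salience_map: (H, W) array with values in [0, 1], e.g. from an FCN
                  salience detector.
    Returns feature maps emphasized at salient locations and
    suppressed elsewhere.
    """
    return feature_maps * salience_map[np.newaxis, :, :]

# Toy example: uniform activations, salience concentrated in the center.
feats = np.ones((2, 4, 4))
sal = np.zeros((4, 4))
sal[1:3, 1:3] = 1.0  # salient central region
out = modulate_features(feats, sal)
# Activations survive only where the salience map is nonzero.
```

In a tracker, such reweighted features would then feed the classification or correlation stage, biasing it toward the salient target region.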

Keywords