Proceedings of the XXth Conference of Open Innovations Association FRUCT (Apr 2022)

Transformer-Based Deep Monocular Visual Odometry for Edge Devices

  • Anton Klochkov,
  • Ivan Drokin

DOI
https://doi.org/10.5281/zenodo.6519935
Journal volume & issue
Vol. 31, no. 2
pp. 422 – 428

Abstract


Many recent works have shown that deep learning-based visual odometry methods outperform existing feature-based approaches in the monocular case. However, most of them cannot be used in mobile robotics because they require a sufficiently powerful computing device. In this paper, we propose a method that significantly reduces the required computing resources at the cost of a slight decrease in accuracy. To achieve this, we replace the recurrent block with a lightweight transformer-based module. We evaluate the accuracy of the proposed model on the KITTI dataset and measure its computational cost on the NVIDIA Jetson Nano and NVIDIA Jetson AGX Xavier. Our experiments show that the proposed model runs faster than the original model considered.
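To illustrate the kind of replacement the abstract describes, the sketch below shows a minimal, hypothetical transformer-based temporal module standing in for the recurrent block of a deep monocular VO pipeline. The class name, layer sizes, and the assumption of precomputed per-frame-pair CNN features are illustrative choices, not taken from the paper.

```python
# Hypothetical sketch (not the authors' code): a lightweight transformer
# encoder in place of the recurrent block of a deep VO pipeline.
# Feature dimension, model width, and depth are assumptions for illustration.
import torch
import torch.nn as nn


class TransformerVOHead(nn.Module):
    """Maps a sequence of per-frame-pair CNN features to 6-DoF relative poses."""

    def __init__(self, feat_dim=1024, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)  # compress CNN features
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dim_feedforward=512, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.pose = nn.Linear(d_model, 6)  # 3 translation + 3 rotation parameters

    def forward(self, feats):  # feats: (B, T, feat_dim)
        x = self.proj(feats)
        x = self.encoder(x)    # temporal self-attention over the frame sequence
        return self.pose(x)    # (B, T, 6) relative poses


if __name__ == "__main__":
    head = TransformerVOHead()
    dummy = torch.randn(2, 5, 1024)  # batch of 2 sequences, 5 frame pairs each
    print(head(dummy).shape)         # torch.Size([2, 5, 6])
```

Compared with an LSTM of similar width, such an encoder avoids sequential hidden-state updates, which is one reason a transformer-based temporal block can be attractive on embedded accelerators like the Jetson family.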

Keywords