IEEE Access (Jan 2020)

Visual-Inertial Fusion Based Positioning Systems

  • Jianan Zhang,
  • Tim Kane

DOI
https://doi.org/10.1109/ACCESS.2020.3032013
Journal volume & issue
Vol. 8
pp. 189761–189774

Abstract

In this paper, we develop a visible light positioning (VLP) system using a camera and low-cost inertial measurement units (IMUs). Applying computer vision and sensor fusion techniques, our VLP system estimates the angle of arrival (AoA) and the distance from a landmark to a mobile device. Because IMUs and cameras are complementary, sensor fusion improves the performance of VLP systems. Most current optical positioning systems require at least two line-of-sight (LOS) links, so their coverage is not always satisfactory. Using a single round light-emitting diode (LED) panel or two coplanar thick black rings as the landmark, our VLP system needs only one LOS link to estimate the orientation and position of the mobile device. By activating inertial navigation, our VLP system can continue localizing even when the landmark is temporarily blocked by obstacles. We derive approximate upper bounds on the angular errors and apply visual-inertial sensor fusion to estimate the Euler angles of the mobile device. Since the sensor-fusion weights are determined by these upper bounds, the expected maximum errors are minimized in our positioning system. In our field experiments, the system achieves an average positioning error of 0.18 m over an effective positioning range of 7 m. Compared with similar positioning systems, ours significantly extends the positioning range without sacrificing accuracy.
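
The abstract does not spell out the fusion rule, but the key idea, weighting each sensor's Euler-angle estimate according to its angular-error upper bound so that the worst-performing sensor contributes less, can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not the paper's implementation: the function name fuse_euler_angles, the inverse-bound weighting (analogous to inverse-variance weighting), and all variable names are assumptions for exposition.

import numpy as np

def fuse_euler_angles(theta_cam, theta_imu, bound_cam, bound_imu):
    # theta_cam, theta_imu: per-axis Euler-angle estimates (roll, pitch, yaw)
    #   from the camera and the IMU, in radians.
    # bound_cam, bound_imu: per-axis angular-error upper bounds for each sensor.
    # All names here are hypothetical; the paper derives its actual bounds
    # from the camera geometry and the IMU noise characteristics.
    theta_cam = np.asarray(theta_cam, dtype=float)
    theta_imu = np.asarray(theta_imu, dtype=float)
    b_cam = np.asarray(bound_cam, dtype=float)
    b_imu = np.asarray(bound_imu, dtype=float)

    # Weight each sensor inversely to its error bound, so the sensor with
    # the looser bound contributes less to the fused estimate.
    w_cam = b_imu / (b_cam + b_imu)
    w_imu = b_cam / (b_cam + b_imu)
    return w_cam * theta_cam + w_imu * theta_imu

# Example: the camera bounds are tight on roll/pitch, while the IMU yaw
# bound is tighter over a short window, so the fused yaw leans toward the IMU.
fused = fuse_euler_angles([0.10, 0.05, 1.20], [0.12, 0.04, 1.15],
                          [0.01, 0.01, 0.05], [0.03, 0.03, 0.02])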

Keywords