Remote Sensing (Mar 2024)

An Up-View Visual-Based Indoor Positioning Method via Deep Learning

  • Chen Chen,
  • Yuwei Chen,
  • Jianliang Zhu,
  • Changhui Jiang,
  • Jianxin Jia,
  • Yuming Bo,
  • Xuanzhi Liu,
  • Haojie Dai,
  • Eetu Puttonen,
  • Juha Hyyppä

DOI
https://doi.org/10.3390/rs16061024
Journal volume & issue
Vol. 16, no. 6
p. 1024

Abstract


Indoor positioning plays a crucial role in many domains. It is employed in applications such as navigation, asset tracking, and location-based services (LBS) in areas where Global Navigation Satellite System (GNSS) signals are denied or degraded. Visual-based positioning is a promising solution for high-accuracy indoor positioning. However, most visual positioning research uses the side-view perspective, which is susceptible to interference and may raise concerns about privacy and public security. Therefore, this paper proposes a novel up-view visual-based indoor positioning algorithm that uses up-view images to realize indoor positioning. Firstly, we utilize a well-trained YOLO V7 model to perform landmark detection and coarse extraction. Then, we use edge detection operators to achieve precise landmark extraction, obtaining the landmark's pixel size. The target position is calculated from the landmark detection and extraction results and a pre-labeled landmark sequence via the Similar Triangle Principle. Additionally, we propose an inertial navigation system (INS)-based landmark matching method to match a landmark within an up-view image to a landmark in the pre-labeled landmark sequence, which is necessary for kinematic indoor positioning. Finally, we conduct static and kinematic experiments to verify the feasibility and performance of the up-view-based indoor positioning method. The results demonstrate that up-view visual-based positioning is promising and worthy of further research.
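The Similar Triangle Principle mentioned in the abstract can be illustrated with a minimal sketch. The paper's exact formulation is not given here, so the function names, the known landmark size, and the camera intrinsics below are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch of the Similar Triangle Principle for up-view positioning:
# a ceiling landmark of known physical size appears smaller the farther it
# is, so its measured pixel size yields the camera-to-landmark distance,
# and its pixel offset from the image center yields the horizontal offset.
# All names and numbers here are illustrative assumptions.

def landmark_distance(real_size_m: float, pixel_size_px: float, focal_px: float) -> float:
    """Distance along the optical axis: real_size / distance = pixel_size / focal."""
    return real_size_m * focal_px / pixel_size_px

def horizontal_offset(pixel_offset_px: float, distance_m: float, focal_px: float) -> float:
    """Metric offset of the landmark from the optical axis at that distance."""
    return pixel_offset_px * distance_m / focal_px

# Example: a 0.6 m ceiling light imaged as 120 px by a camera with a
# 1000 px focal length, its center 200 px from the image center.
d = landmark_distance(0.6, 120.0, 1000.0)   # 5.0 m to the ceiling landmark
dx = horizontal_offset(200.0, d, 1000.0)    # 1.0 m horizontal displacement
```

Combining the horizontal offset with the pre-labeled landmark's map position would then give the target position, as the abstract describes.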

Keywords