IEEE Access (Jan 2024)

Real-Time Human Tracking Using Multi-Features Visual With CNN-LSTM and Q-Learning

  • Devira Anggi Maharani,
  • Carmadi Machbub,
  • Pranoto Hidaya Rusmin,
  • Lenni Yulianti

DOI
https://doi.org/10.1109/ACCESS.2024.3355785
Journal volume & issue
Vol. 12
pp. 13233–13247

Abstract


Various methods are employed in computer vision applications to identify individuals, including face recognition, a human visual feature that is useful for tracking or searching for a person. However, tracking systems that rely solely on facial information encounter limitations, particularly when faced with occlusions, blurred images, or faces oriented away from the camera. Under these conditions, the system struggles to achieve accurate tracking based on face recognition. Therefore, this research addresses the issue by fusing facial visual features with body visual features. When the system cannot find the target face, the hybrid CNN+LSTM method assists in multi-feature body visual recognition, narrowing the search space and speeding up the search process. The results indicate that the CNN+LSTM combination yields higher accuracy, recall, precision, and F1 scores (89.20%, 87.36%, 91.02%, and 88.43%, respectively) than the single CNN method (88.84%, 74.00%, 67.00%, and 69.00%, respectively). However, combining these two visual features is computationally expensive, so a tracking system is added to reduce the computational load and predict the target's location. Furthermore, this research utilizes the Q-Learning algorithm to make optimal decisions when automatically tracking objects in dynamic environments. The system considers factors such as face and body visual features, object location, and environmental conditions to make the best decisions, aiming to enhance tracking efficiency and accuracy. Based on the conducted experiments, it is concluded that the system can adjust its actions in response to environmental changes with better outcomes: it achieves an accuracy rate of 91.5% and an average of 50 fps on five different videos, as well as 84% accuracy and an average error of 11.15 pixels on a video benchmark dataset. The proposed method speeds up the search process and optimizes tracking decisions, saving time and computational resources.
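
To make the recognition pipeline concrete, the following is a minimal sketch of a CNN+LSTM classifier over a short sequence of body crops, in the spirit of the hybrid method described in the abstract. It assumes a PyTorch implementation; the ResNet-18 backbone, layer sizes, clip length, and number of identity classes are illustrative placeholders rather than the paper's actual configuration.

    # Minimal sketch of a CNN+LSTM classifier over a sequence of body-crop frames.
    # PyTorch is assumed for illustration; the ResNet-18 backbone, hidden size,
    # and number of identity classes are placeholders, not the paper's values.
    import torch
    import torch.nn as nn
    from torchvision import models

    class CNNLSTMBodyRecognizer(nn.Module):
        def __init__(self, num_classes=10, hidden_size=256):
            super().__init__()
            backbone = models.resnet18(weights=None)     # per-frame CNN feature extractor
            feat_dim = backbone.fc.in_features
            backbone.fc = nn.Identity()
            self.cnn = backbone
            self.lstm = nn.LSTM(feat_dim, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, num_classes)

        def forward(self, clips):                        # clips: (B, T, C, H, W)
            b, t, c, h, w = clips.shape
            feats = self.cnn(clips.view(b * t, c, h, w)) # (B*T, feat_dim)
            feats = feats.view(b, t, -1)                 # restore the time axis
            out, _ = self.lstm(feats)                    # temporal modeling over frames
            return self.head(out[:, -1])                 # classify from the last time step

    # Example: a batch of 2 clips, 8 frames each, 224x224 RGB body crops
    logits = CNNLSTMBodyRecognizer()(torch.randn(2, 8, 3, 224, 224))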
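
The tracking-decision component can likewise be illustrated with a tabular Q-Learning sketch. The states, actions, reward values, and hyperparameters below are assumptions chosen for clarity (switching between face recognition, body recognition, and tracker prediction depending on target visibility); the paper's actual state and action definitions may differ.

    # Tabular Q-Learning sketch for choosing a tracking action per frame.
    # State/action sets, rewards, and hyperparameters are illustrative assumptions.
    import random
    from collections import defaultdict

    STATES  = ["face_visible", "body_only", "target_lost"]
    ACTIONS = ["face_recognition", "body_recognition", "tracker_prediction"]
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

    Q = defaultdict(float)                      # Q[(state, action)] -> value estimate

    def choose_action(state):
        # epsilon-greedy: explore occasionally, otherwise pick the best-known action
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        # standard Q-Learning temporal-difference update
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

In use, a per-frame loop would call choose_action on the current visibility state, execute the corresponding recognition or prediction step, observe a reward (for example, whether the target was re-acquired), and then call update before moving to the next frame.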

Keywords