Drones (Dec 2022)

FRCNN-Based Reinforcement Learning for Real-Time Vehicle Detection, Tracking and Geolocation from UAS

  • Chandra Has Singh,
  • Vishal Mishra,
  • Kamal Jain,
  • Anoop Kumar Shukla

DOI
https://doi.org/10.3390/drones6120406
Journal volume & issue
Vol. 6, no. 12
p. 406

Abstract

In the last few years, uncrewed aerial systems (UASs) have been widely employed for many applications, including urban traffic monitoring. However, detecting, tracking, and geolocating moving vehicles from UAVs is hampered by low-accuracy sensors, complex scenes, small object sizes, and motion-induced noise. To address these problems, this study presents an intelligent, self-optimising, real-time framework for automated vehicle detection, tracking, and geolocation in UAV-acquired images, which combines detection, location, and tracking features to improve the final decision. Noise is first reduced by the proposed adaptive filtering, which makes the detection algorithm more versatile. Thereafter, in the detection step, top-hat and bottom-hat transformations are applied, assisted by the Overlapped Segmentation-Based Morphological Operation (OSBMO). Following the detection phase, background regions are removed by analysing the motion feature points of the detected object regions with a method that combines the Kanade–Lucas–Tomasi (KLT) tracker and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The extracted object features are clustered into separate objects on the basis of their motion characteristics. Finally, vehicle labels are assigned to their corresponding cluster trajectories by an efficient reinforcement connecting algorithm, whose policy-making possibilities are evaluated. A Fast Region-based Convolutional Neural Network (Fast-RCNN) is designed and trained on a small collection of samples, and then used to remove false targets. The proposed framework was tested on videos acquired in various scenarios. The methodology demonstrated its capability through the automatic supervision of target vehicles in real-world trials, indicating its potential for intelligent transport systems and other surveillance applications.
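The abstract only names the building blocks of the pipeline. As a hedged illustration, the sketch below shows how top-hat/bottom-hat enhancement, KLT optical-flow tracking, and DBSCAN clustering of motion features are conventionally chained in Python with OpenCV and scikit-learn. It is not the authors' implementation: the adaptive filtering, OSBMO, reinforcement connecting algorithm, and Fast-RCNN verification steps are specific to the paper and are omitted, and all parameter values here are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): conventional use of top-hat/bottom-hat
# enhancement, KLT optical flow, and DBSCAN motion clustering for small moving
# vehicles in aerial video. Parameter values are assumptions for illustration only.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))

def enhance_small_objects(gray):
    """Boost small bright/dark structures: gray + top-hat - bottom-hat."""
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    bottomhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    return cv2.subtract(cv2.add(gray, tophat), bottomhat)

def cluster_moving_points(prev_gray, curr_gray):
    """Track corner features with KLT optical flow, then group them with DBSCAN
    by position and motion so near-static (background) points fall out as noise."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return []
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    p0, p1 = pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
    flow = p1 - p0
    moving = np.linalg.norm(flow, axis=1) > 1.0          # drop ~static background
    if not moving.any():
        return []
    feats = np.hstack([p1[moving], 5.0 * flow[moving]])  # weight motion vs. position
    labels = DBSCAN(eps=25.0, min_samples=5).fit_predict(feats)
    # Each non-noise cluster approximates one moving vehicle's feature points.
    return [p1[moving][labels == k] for k in set(labels) if k != -1]
```

In such a pipeline, the per-cluster point sets would then be handed to a track-association stage (the paper's reinforcement connecting algorithm) and a learned verifier (the paper's Fast-RCNN) to assign stable vehicle labels and reject false detections.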

Keywords