IEEE Access (Jan 2024)

DRUformer: Enhancing Driving Scene Important Object Detection With Driving Scene Relationship Understanding

  • Yingjie Niu,
  • Ming Ding,
  • Keisuke Fujii,
  • Kento Ohtani,
  • Alexander Carballo,
  • Kazuya Takeda

DOI: https://doi.org/10.1109/ACCESS.2024.3400589
Journal volume & issue: Vol. 12, pp. 67589–67599

Abstract

Traffic accidents frequently lead to fatal injuries, claiming millions of lives every year. To mitigate driving hazards and ensure personal safety, it is crucial to help vehicles anticipate the objects in a traffic scene that may pose a threat during the driving task (treated here as important objects). Previous research on important object detection primarily assessed the importance of individual participants, treating them as independent entities and frequently neglecting the interconnections among them. This approach has proven less effective at detecting important objects in complex scenarios. In this work, we introduce the Driving scene Relationship Understanding transformer (DRUformer), designed to enhance the important object detection task. DRUformer is a transformer-based multi-modal important object detection model that takes into account the relationships among all participants in the driving scenario. Recognizing that driving intention also significantly affects which objects are important during driving, we incorporate a driving intention embedding module. To assess the performance of our approach, we conducted comparative experiments on the DRAMA dataset against other state-of-the-art (SOTA) models. The results show a 16.2% improvement in mIoU and a 12.3% boost in ACC over SOTA methods. Furthermore, we conducted a qualitative analysis of our model's ability to detect important objects across different road scenarios and object classes, highlighting its effectiveness in diverse contexts. Finally, we performed ablation studies to assess the contribution of each proposed module in DRUformer. Extensive experimentation demonstrates that our model performs exceptionally well on the driving scene important object localization task. The code is publicly available at: https://github.com/oniu-uin0/DRUformer
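As a quick reference for the metrics cited above, the snippet below is a minimal sketch of how box-level mean IoU and accuracy are commonly computed for predicted versus ground-truth bounding boxes. The [x1, y1, x2, y2] box format and the 0.5 IoU threshold for counting a detection as correct are assumptions, not the paper's exact evaluation protocol.

```python
# Minimal sketch (assumed box format [x1, y1, x2, y2]; assumed 0.5 IoU threshold).

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2]."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def evaluate(pred_boxes, gt_boxes, thresh=0.5):
    """Return (mIoU, ACC) over paired predicted / ground-truth boxes."""
    ious = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    miou = sum(ious) / len(ious)
    acc = sum(v >= thresh for v in ious) / len(ious)
    return miou, acc
```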

Keywords