Frontiers in Neurorobotics (Sep 2024)

Multi-modal remote perception learning for object sensory data

  • Nouf Abdullah Almujally,
  • Adnan Ahmed Rafique,
  • Naif Al Mudawi,
  • Abdulwahab Alazeb,
  • Mohammed Alonazi,
  • Asaad Algarni,
  • Ahmad Jalal,
  • Hui Liu

DOI
https://doi.org/10.3389/fnbot.2024.1427786
Journal volume & issue
Vol. 18

Abstract

Introduction: When interpreting visual input, intelligent systems make use of contextual scene learning, which significantly improves both resilience and context awareness. The need to manage enormous amounts of data is a driving force behind the growing interest in computational frameworks, particularly in the context of autonomous vehicles.

Method: This study introduces a novel approach, Deep Fused Networks (DFN), which improves contextual scene comprehension by merging multi-object detection with semantic analysis.

Results: To enhance accuracy and comprehension in complex scenes, DFN combines deep learning with fusion techniques, achieving a minimum accuracy gain of 6.4% on the SUN-RGB-D dataset and 3.6% on the NYU-Dv2 dataset.

Discussion: The findings demonstrate considerable enhancements in object detection and semantic analysis compared with currently used methodologies.
