Applied Sciences (Nov 2023)

Multi-Sensor Data Fusion Method Based on Self-Attention Mechanism

  • Xuezhu Lin,
  • Shihan Chao,
  • Dongming Yan,
  • Lili Guo,
  • Yue Liu,
  • Lijuan Li

DOI
https://doi.org/10.3390/app132111992
Journal volume & issue
Vol. 13, no. 21
p. 11992

Abstract

In 3D reconstruction tasks, deep-learning-based fusion of single-sensor data is limited by the completeness and accuracy of that data, which reduces the accuracy and reliability of the fusion results. To address this issue, this study proposes a multi-sensor data fusion method based on a self-attention (SA) mechanism. A fusion model for acquiring multi-source, multi-modal data is constructed around its core component, a convolutional neural network with self-attention (CNN-SA): CNNs extract features from each modality, and the SA mechanism computes a weighted sum of these features, adaptively focusing on the importance of each modality. This enables mutual support, complementarity, and correction among the multi-modal data. Experimental results demonstrate that the accuracy of the CNN-SA network improves by 72.6%, surpassing the improvements of 29.9% for CNN-CBAM, 23.6% for CNN, and 11.4% for CNN-LSTM, and that it exhibits enhanced generalization capability, accuracy, and robustness. The proposed approach will contribute to the effectiveness of multi-sensor data fusion processing.
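The weighting-and-summing step the abstract describes can be illustrated with a minimal sketch. The code below is not the paper's implementation: it assumes per-modality feature vectors have already been extracted (by CNN backbones in the paper), and it uses identity Q/K/V projections for brevity where the CNN-SA module would use learned linear layers. It applies scaled dot-product self-attention across modalities, so each modality's output is a softmax-weighted sum of all modality features.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def modality_self_attention(features):
    """Scaled dot-product self-attention over per-modality feature vectors.

    features: list of M equal-length vectors, one per sensor modality,
    assumed to be the outputs of per-modality CNN feature extractors.
    Identity projections stand in for the learned Q/K/V layers of a full
    self-attention module (a simplification, not the paper's design).
    Returns (fused, weights): M re-weighted feature vectors and the MxM
    attention matrix, each row of which sums to 1.
    """
    d = len(features[0])
    scale = math.sqrt(d)
    weights, fused = [], []
    for q in features:
        # Similarity of this modality's query to every modality's key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale
                  for k in features]
        w = softmax(scores)
        weights.append(w)
        # Weighted sum of all modality features (the "values").
        fused.append([sum(w[j] * features[j][i] for j in range(len(features)))
                      for i in range(d)])
    return fused, weights
```

In this sketch, modalities whose features agree with the query modality receive larger weights, which is how the attention step can adaptively emphasize more informative sensors during fusion.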

Keywords