IEEE Access (Jan 2024)

Convolutional Concatenation Fusion Imaging Method for ERT/UTT Dual-Modality Tomography

  • Feng Li,
  • Yushan Liu,
  • Liangliang Xu,
  • Zhimin Qiao,
  • Yongwei Li

DOI
https://doi.org/10.1109/ACCESS.2024.3447477
Journal volume & issue
Vol. 12
pp. 118099 – 118108

Abstract


A convolutional concatenation fusion imaging network is proposed for electrical resistance tomography (ERT) and ultrasonic transmission tomography (UTT) dual-modality fusion imaging. The network contains four stages: initial imaging, concatenation fusion, feature extraction, and image reconstruction. The fusion of heterogeneous sensors is completed in the initial imaging stage, where the measured ERT voltages and the measured UTT sound pressures are converted into pixel information of the same dimension. The feature information of ERT and UTT is sequentially concatenated into a feature vector of the same scale to form a dual-modality shared semantic space, which realizes the fusion of dual-modality feature information. The feature extraction stage consists of two convolution blocks and a spatial pyramid pooling block, which perform the self-mining and extraction of multiscale features. The image reconstruction stage is composed of two deconvolution modules, which reconstruct the multi-medium distribution from the extracted features. A set of numerical and experimental tests is carried out to evaluate the performance of the proposed method. The results show that the proposed ERT/UTT dual-modality imaging method offers a clear improvement in imaging accuracy over the dual-modality method WAF and the single-modality methods ERT-CIN and UTT-CIN.
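The first two stages described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the measurement counts, pixel-grid size, and the linear initial-imaging operators are all assumptions standing in for the paper's actual initial-imaging algorithms.

```python
import numpy as np

N_PIX = 64 * 64  # pixels per initial image (assumed 64x64 reconstruction grid)

rng = np.random.default_rng(0)

# Stage 1 (initial imaging): ERT voltages and UTT sound pressures are
# converted into pixel vectors of the same dimension. The random linear
# operators below are placeholders for the real initial-imaging step.
ert_voltages = rng.normal(size=104)   # assumed number of ERT measurements
utt_pressures = rng.normal(size=120)  # assumed number of UTT measurements
A_ert = rng.normal(size=(N_PIX, 104))
A_utt = rng.normal(size=(N_PIX, 120))
ert_image = A_ert @ ert_voltages      # shape (N_PIX,)
utt_image = A_utt @ utt_pressures     # shape (N_PIX,)

# Stage 2 (concatenation fusion): the two same-dimension pixel vectors
# are sequentially concatenated into a single feature vector, forming
# the dual-modality shared semantic space that the downstream
# convolution, spatial-pyramid-pooling, and deconvolution stages consume.
fused = np.concatenate([ert_image, utt_image])
print(fused.shape)  # (8192,)
```

The key point the sketch illustrates is that fusion happens at the pixel level: because both modalities are first mapped to images of identical dimension, concatenation is well defined regardless of how many raw measurements each sensor array produces.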

Keywords