IEEE Access (Jan 2023)

GLFormer: An Efficient Transformer Network for Fast Magnetic Resonance Imaging Reconstruction

  • Rongqing Wang,
  • Mengdie Song,
  • Jiantai Zhou,
  • Bensheng Qiu

DOI
https://doi.org/10.1109/ACCESS.2023.3300789
Journal volume & issue
Vol. 11
pp. 83209–83220

Abstract


Deep learning (DL)-based methods substantially enhance the speed of magnetic resonance imaging (MRI). Recently, transformer network architectures have been increasingly applied to image reconstruction owing to their exceptional ability to model long-range dependencies. However, directly employing a transformer network for MRI reconstruction imposes a considerable computational burden because the computational complexity of the transformer scales quadratically with the image spatial resolution. To alleviate this limitation, this study aims to design a computationally efficient transformer network with improved reconstruction performance. The proposed network, termed the global-local transformer (GLFormer), is based on a multi-input multi-output architecture consisting of three components. A simplified self-attention mechanism, termed global attention, is designed to extract long-range dependencies using a global pooling operator while maintaining linear complexity. Furthermore, depthwise convolution is incorporated into a feedforward network (FFN) to perform local feature aggregation, and a parallel gated branch is designed for the FFN, thereby enhancing the effectiveness of representation learning and improving the reconstruction performance. To strengthen the network's ability to perceive frequency information, a deep frequency attention module is proposed to adaptively decompose and adjust frequency-domain features. Experiments conducted on public datasets indicate that GLFormer outperforms state-of-the-art DL-based methods across different undersampling rates and types of undersampling patterns. Furthermore, GLFormer uses fewer model parameters (2.4M) and incurs a lower computational burden (19G) than previous methods, while maintaining high reconstruction quality.
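The linear-complexity global attention described above can be illustrated with a minimal NumPy sketch. This is an assumption-based illustration, not the paper's exact formulation: it assumes the global pooling operator is a mean over tokens that produces a single global descriptor, against which per-token attention scores are computed, so the cost is O(N·C) in the number of tokens N rather than the O(N²·C) of standard self-attention. The function name `global_attention` and all details are hypothetical.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def global_attention(x):
    """Simplified global attention over token features x of shape (N, C).

    A global descriptor is obtained by mean pooling over tokens; each token
    is scored against this single descriptor, so complexity is linear in N.
    """
    n, c = x.shape
    g = x.mean(axis=0)                # (C,) global pooled descriptor
    scores = x @ g / np.sqrt(c)       # (N,) one score per token
    weights = softmax(scores)         # attention distribution over tokens
    context = weights @ x             # (C,) attention-weighted global context
    return x + context                # broadcast residual add, shape (N, C)

# Example: 16 tokens with 8 channels each
rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8))
out = global_attention(feats)
```

Because every token attends only to the single pooled descriptor, doubling the number of tokens roughly doubles the cost, which is what makes this style of attention attractive for high-resolution MRI feature maps.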

Keywords