IEEE Access (Jan 2022)
Super-Resolution Reconstruction of 3T-Like Images From 0.35T MRI Using a Hybrid Attention Residual Network
Abstract
Magnetic resonance (MR) images from low-field scanners have poorer signal-to-noise ratios (SNRs) than those from high-field scanners at the same spatial resolution. To obtain a clinically acceptable SNR, radiologists operating low-field scanners use a much smaller acquisition matrix than is used on high-field scanners. The resulting loss of spatial resolution indicates the need for further research to improve the image quality of low-field systems. Super-resolution (SR) techniques offer an alternative approach to image reconstruction. However, the predetermined degradation models embedded in these techniques, such as bicubic downsampling, seem to impose a performance drop when the actual degradation differs from the pre-defined assumption. To address this problem, we collected a unique dataset by scanning 70 participants. The anatomical locations of the scanned image slices were the same for the 0.35T and 3T data. The low-resolution (LR) images (0.35T) and high-resolution (HR) images (3T) formed the image pairs used for training. Herein, we introduce a novel CNN-based network with hybrid attention mechanisms (HybridAttentionResNet, HARN) that adaptively captures diverse information and reconstructs super-resolved 0.35T MR images (3T-like MR images). Specifically, the proposed dense block combines variant dense blocks and attention blocks to extract abundant features from the LR images. The experimental results demonstrate that our proposed residual network efficiently recovers significant textures while achieving a high peak signal-to-noise ratio (PSNR) and an appealing structural similarity index (SSIM). Moreover, an extensive subjective mean opinion score (SMOS) evaluation indicates that HARN is promising for clinical application.
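The abstract does not specify the exact layer design of HARN. As a rough illustration only, the following PyTorch sketch shows one way a hybrid attention residual block could pair dense connections with channel attention; the class names, SE-style attention, channel counts, and growth rate are assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative assumption)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global spatial average per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))  # rescale each feature channel


class HybridAttentionBlock(nn.Module):
    """Dense convolutions followed by channel attention, wrapped in a residual skip."""
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1)
            for i in range(layers)
        )
        self.fuse = nn.Conv2d(channels + layers * growth, channels, kernel_size=1)
        self.attn = ChannelAttention(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:  # dense connectivity: each layer sees all previous outputs
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        out = self.attn(self.fuse(torch.cat(feats, dim=1)))
        return x + out  # residual connection around the hybrid block


# Quick shape check on a dummy feature map from a single-channel MR slice
block = HybridAttentionBlock(channels=64)
print(block(torch.randn(1, 64, 48, 48)).shape)  # torch.Size([1, 64, 48, 48])
```

In such a design, the dense connections accumulate multi-level local features while the attention branch reweights channels, which is one plausible reading of "combining variant dense blocks and attention blocks" described above.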
Keywords