The Astrophysical Journal Supplement Series (Jan 2025)
Astronomical Image Superresolution Reconstruction with Deep Learning for Better Identification of Interacting Galaxies
Abstract
Galaxy–galaxy mergers play a crucial role in galaxy evolution, but the tidal features around galaxies are often faint, making interacting or merging galaxies difficult to identify. High-resolution images can reveal the fine structures within galaxies that are essential for identifying and distinguishing the different substructures of merging systems. However, owing to observational and instrumental limitations, galaxy data are often collected at low resolution. To further improve visual quality and enhance the details of galaxy structures, we propose a dual-branch network structure combining convolutional neural networks (CNNs) and a Transformer (DBCTNet), which leverages the local feature extraction of CNNs to complement the global modeling of the Transformer. We select four representative models for comparative experiments: Real-ESRGAN, SwinIR, Hybrid Attention Transformer, and EDAT. In the experiments, we adopt a two-stage training strategy. In the first stage, DBCTNet improves the peak signal-to-noise ratio over these models by 0.13, 0.19, 0.12, and 0.11, respectively, and achieves the highest structural similarity index value of 0.5578. In the second stage, we use the DBCTNet trained in the first stage as the generator to train DBCTGAN, a galaxy image superresolution reconstruction model based on generative adversarial networks, which aims to enhance the visual quality of the reconstructed images. In addition, we use superresolution methods as a preprocessing step for the task of interacting galaxy classification. Experimental results show that preprocessing with DBCTGAN improves classification performance compared with the other models, further verifying its effectiveness in enhancing the quality of low-resolution images.
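The abstract describes a dual-branch design in which a convolutional branch supplies local texture features and a Transformer branch supplies global context. The sketch below is only an illustration of that general idea, not the authors' DBCTNet implementation: the module name, channel widths, attention settings, and fusion scheme are assumptions chosen for brevity.

```python
# Hypothetical sketch of a dual-branch CNN + Transformer block (not the
# published DBCTNet architecture): a convolutional branch models local
# structure while a self-attention branch models global dependencies,
# and the two are fused before a residual connection.
import torch
import torch.nn as nn


class DualBranchBlock(nn.Module):
    """Fuse a convolutional (local) branch with a Transformer (global) branch."""

    def __init__(self, channels: int = 64, num_heads: int = 4):
        super().__init__()
        # Local branch: stacked 3x3 convolutions capture fine galaxy textures.
        self.cnn_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Global branch: multi-head self-attention over all spatial positions.
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # 1x1 convolution fuses the concatenated branch outputs.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.cnn_branch(x)
        # Flatten the spatial dimensions into a token sequence for attention.
        tokens = self.norm(x.flatten(2).transpose(1, 2))        # (B, H*W, C)
        global_feat, _ = self.attn(tokens, tokens, tokens)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        # Concatenate both branches, project back, and keep a residual path.
        return x + self.fuse(torch.cat([local, global_feat], dim=1))


if __name__ == "__main__":
    block = DualBranchBlock(channels=64)
    lr_features = torch.randn(1, 64, 32, 32)   # features from a low-resolution galaxy image
    print(block(lr_features).shape)            # torch.Size([1, 64, 32, 32])
```

In a full superresolution network, blocks of this kind would typically sit between a shallow feature extractor and an upsampling head; in the paper's second training stage, such a network serves as the generator within a GAN framework.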
Keywords