Scientific Reports (Feb 2025)
Unsupervised translation of vascular masks to NIR-II fluorescence images using attention-guided generative adversarial networks
Abstract
Fluorescence imaging in the second near-infrared window (NIR-II) is a crucial technology for investigating the structure and function of blood vessels. However, privacy concerns and the substantial effort required for data annotation complicate the acquisition of NIR-II vascular imaging datasets. To address these issues, deep learning-based data synthesis methods have shown promise in generating high-quality synthetic images. In this paper, we propose an unsupervised generative adversarial network (GAN) approach for translating vascular masks into realistic NIR-II fluorescence vascular images. By integrating an attention mechanism into the loss function, our model focuses on essential features during generation, producing high-quality NIR-II images without the need for supervision. Our method significantly outperforms eight baseline techniques in both visual quality and quantitative metrics, demonstrating its potential to address the challenge of limited datasets in NIR-II medical imaging. This work not only enhances the applications of NIR-II imaging but also facilitates downstream tasks by providing abundant, high-fidelity synthetic data.
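The abstract does not specify the exact form of the attention-guided objective, so the following is only a minimal PyTorch sketch of one plausible reading: an auxiliary attention map re-weights the pixel-wise consistency term of an unsupervised mask-to-image GAN so that vessel regions dominate the penalty. The names `AttentionHead`, `generator_loss`, and `lambda_cyc` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the paper's code): attention-weighted generator
# objective for an unsupervised vascular-mask -> NIR-II image GAN.

class AttentionHead(nn.Module):
    """Predicts a per-pixel attention map in [0, 1] from the input mask."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def generator_loss(disc_fake_logits: torch.Tensor,
                   cycled_mask: torch.Tensor,
                   real_mask: torch.Tensor,
                   attn: torch.Tensor,
                   lambda_cyc: float = 10.0) -> torch.Tensor:
    """Adversarial term plus an attention-weighted cycle-consistency term."""
    # Standard non-saturating adversarial loss for the generator.
    adv = nn.functional.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    # The attention map up-weights vessel pixels in the L1 reconstruction,
    # steering the generator toward the structures that matter clinically.
    cyc = (attn * (cycled_mask - real_mask).abs()).mean()
    return adv + lambda_cyc * cyc
```

Under this assumed formulation, training proceeds as in a standard unsupervised translation GAN (e.g. CycleGAN-style cycles); only the weighting of the consistency loss changes, which is what allows the model to emphasize vascular structure without paired supervision.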
Keywords