Multichannel Fully Convolutional Network for Coronary Artery Segmentation in X-Ray Angiograms
Jingfan Fan, Jian Yang, Yachen Wang, Siyuan Yang, Danni Ai, Yong Huang, Hong Song, Aimin Hao, Yongtian Wang
Affiliations
Jingfan Fan
Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, China
Jian Yang
Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, China
Yachen Wang
Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, China
Siyuan Yang
Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, China
Danni Ai
Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, China
Yong Huang
Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, China
Hong Song
School of Software, Beijing Institute of Technology, Beijing, China
Aimin Hao
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
Yongtian Wang
Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, China
Accurate segmentation of coronary arteries in X-ray angiograms is an important step in the quantitative study of coronary artery disease. However, it is a challenging task because coronary arteries are thin tubular structures with relatively low contrast, and the images often contain artifacts. In this paper, a novel deep-learning-based method is proposed to automatically segment the coronary artery from angiograms using a multichannel fully convolutional network. Because artifacts appear in both live images (acquired after the injection of contrast material) and mask images (acquired before the injection), whereas blood vessels appear only in live images, the mask images are taken into consideration to distinguish real vascular structures from artifacts. Both live and mask images are therefore used as multichannel inputs to provide enhanced vascular structure information. Hierarchical features are then automatically learned to characterize the spatial associations between vessel and background and are further used to achieve the final segmentation. In addition, dense matching between the live image and the mask image is performed to obtain a precise initial alignment. Experimental results demonstrate that the proposed method is effective and robust for coronary artery segmentation compared with several state-of-the-art methods.
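To illustrate the multichannel-input idea described above, the following is a minimal PyTorch sketch, not the authors' implementation: the live and mask frames are stacked as two input channels of a small fully convolutional encoder-decoder that predicts per-pixel vessel/background scores. The class name MultichannelFCN, the layer widths, and the network depth are illustrative assumptions rather than the architecture reported in the paper.

import torch
import torch.nn as nn

class MultichannelFCN(nn.Module):
    """Toy two-channel FCN: stacked live + mask images in, per-pixel logits out."""

    def __init__(self, in_channels: int = 2, num_classes: int = 2):
        super().__init__()
        # Encoder: two downsampling stages that learn hierarchical features.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # Decoder: upsample back to the input resolution and predict
        # vessel/background scores for every pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, live: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # Stack the live image (after contrast injection) and the pre-aligned
        # mask image (before injection) along the channel dimension.
        x = torch.cat([live, mask], dim=1)
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    live = torch.randn(1, 1, 256, 256)   # live angiogram frame
    mask = torch.randn(1, 1, 256, 256)   # mask frame, assumed already aligned
    logits = MultichannelFCN()(live, mask)
    print(logits.shape)                  # torch.Size([1, 2, 256, 256])

In this sketch the mask frame is assumed to be already registered to the live frame; in the paper this alignment is obtained by the dense matching step performed before segmentation.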