Scientific Reports (Sep 2022)

Dual attention network for unsupervised medical image registration based on VoxelMorph

  • Yong-xin Li,
  • Hui Tang,
  • Wei Wang,
  • Xiu-feng Zhang,
  • Hang Qu

DOI
https://doi.org/10.1038/s41598-022-20589-7
Journal volume & issue
Vol. 12, no. 1
pp. 1 – 11

Abstract


Accurate medical image registration is crucial in a variety of neuroscience and clinical studies. In this paper, we propose a new unsupervised learning network, DAVoxelMorph, to improve the accuracy of 3D deformable medical image registration. Building on the VoxelMorph model, our network introduces two modifications. First, we add a dual attention architecture: we model semantic correlations along the spatial and coordinate dimensions respectively, where the location attention module selectively aggregates the features at each location by weighting the features of all locations, and the coordinate attention module further embeds location information into channel attention. Second, we introduce a bending penalty as regularization in the loss function to penalize bending in the deformation field. Experimental results show that DAVoxelMorph achieved better registration performance, including average Dice score (0.714) and percentage of locations with a non-positive Jacobian (0.345), compared with VoxelMorph (0.703, 0.355), CycleMorph (0.705, 0.133), ANTs SyN (0.707, 0.137) and NiftyReg (0.694, 0.549). Our model increases both model sensitivity and registration accuracy.
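To make the two modifications concrete, the sketch below shows a DANet-style position (location) attention block for 3D feature maps and a bending-energy penalty built from second-order finite differences of the displacement field. This is a minimal illustrative PyTorch sketch, not the authors' released code; class and function names such as PositionAttention3D and bending_penalty are assumptions.

```python
import torch
import torch.nn as nn

class PositionAttention3D(nn.Module):
    """DANet-style location attention over a 3D feature map (illustrative sketch).

    Each voxel's feature is re-estimated as a weighted sum of the features at
    all locations, with weights given by pairwise feature similarity.
    """
    def __init__(self, channels):  # assumes channels >= 8
        super().__init__()
        self.query = nn.Conv3d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv3d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):                                     # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        n = d * h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)     # (B, N, C//8)
        k = self.key(x).view(b, -1, n)                         # (B, C//8, N)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)          # (B, N, N) location similarities
        v = self.value(x).view(b, c, n)                        # (B, C, N)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, d, h, w)
        return self.gamma * out + x                            # residual re-weighting


def bending_penalty(disp):
    """Bending-energy regularizer for a dense 3D displacement field.

    disp: (B, 3, D, H, W) per-voxel displacements. Squared second-order finite
    differences penalize sharp bending in the predicted deformation.
    """
    dzz = disp[:, :, 2:, :, :] - 2 * disp[:, :, 1:-1, :, :] + disp[:, :, :-2, :, :]
    dyy = disp[:, :, :, 2:, :] - 2 * disp[:, :, :, 1:-1, :] + disp[:, :, :, :-2, :]
    dxx = disp[:, :, :, :, 2:] - 2 * disp[:, :, :, :, 1:-1] + disp[:, :, :, :, :-2]
    dz = disp[:, :, 1:, :, :] - disp[:, :, :-1, :, :]
    dy = disp[:, :, :, 1:, :] - disp[:, :, :, :-1, :]
    dzy = dz[:, :, :, 1:, :] - dz[:, :, :, :-1, :]
    dzx = dz[:, :, :, :, 1:] - dz[:, :, :, :, :-1]
    dyx = dy[:, :, :, :, 1:] - dy[:, :, :, :, :-1]
    return (dzz.pow(2).mean() + dyy.pow(2).mean() + dxx.pow(2).mean()
            + 2 * (dzy.pow(2).mean() + dzx.pow(2).mean() + dyx.pow(2).mean()))
```

Note that the (N x N) attention matrix grows quadratically with the number of voxels, so such blocks are typically applied to a coarse feature map rather than at full resolution, and the full training objective would combine an image-similarity term with this penalty weighted by a hyperparameter.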