Symmetry (Aug 2024)
Research on Unsupervised Feature Point Prediction Algorithm for Multigrid Image Stitching
Abstract
Conventional feature point-based image stitching algorithms produce feature points of inconsistent quality across different scenes, which can degrade alignment or even make two images impossible to align. To address this issue, this paper presents an unsupervised multigrid image alignment method that combines the conventional feature point-based alignment algorithm with deep learning. The method assumes that feature points are uniformly distributed over the image and uses a deep learning network to predict their displacements, thereby improving the robustness of the feature points. Alignment accuracy is further improved by parameterizing the APAP (as-projective-as-possible image stitching with moving DLT) multigrid deformation. Finally, exploiting the symmetry between the homography matrix and its inverse throughout the projection process, image chunking inverse warping is introduced to obtain the stitched images for the multigrid deep learning network, and a mesh shape-preserving loss is added to constrain the shape of the multigrid. Experimental results on the real-world UDIS-D dataset show notable improvements in feature point matching and homography estimation, and the method also achieves superior alignment performance on traditional image stitching datasets.
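For readers unfamiliar with the inverse-warping idea referred to above, the following minimal Python sketch (not the paper's implementation; OpenCV and NumPy are assumed, and the homography values are purely illustrative) shows the symmetry the abstract mentions: a homography H maps an image into the stitched frame, and its inverse H⁻¹ pulls a warped block back into its own coordinate frame.

import cv2
import numpy as np

# Hypothetical 3x3 homography used only for illustration.
H = np.array([[1.0,  0.05, 10.0],
              [0.02, 1.0,   5.0],
              [1e-4, 1e-4,  1.0]])

# Stand-in source image (random pixels) in place of a real input.
src = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
h, w = src.shape[:2]

# Forward warp: project the source into the target (stitched) frame with H.
forward = cv2.warpPerspective(src, H, (w, h))

# Inverse warp: run the same projection "backwards" with H^-1.
backward = cv2.warpPerspective(forward, np.linalg.inv(H), (w, h))

# Equivalently, OpenCV can apply H in inverse-mapping mode directly.
backward_flag = cv2.warpPerspective(
    forward, H, (w, h), flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)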
Keywords