PLoS ONE (Jan 2025)

UFM: Unified feature matching pre-training with multi-modal image assistants.

  • Yide Di,
  • Yun Liao,
  • Hao Zhou,
  • Kaijun Zhu,
  • Qing Duan,
  • Junhui Liu,
  • Mingyu Lu

DOI: https://doi.org/10.1371/journal.pone.0319051
Journal volume & issue: Vol. 20, no. 3, Article e0319051

Abstract


Image feature matching, a foundational task in computer vision, remains challenging for multimodal image applications, which often require intricate, dataset-specific training. In this paper, we introduce a Unified Feature Matching pre-trained model (UFM) designed to address feature matching challenges across a wide spectrum of image modalities. We present Multimodal Image Assistant (MIA) transformers, finely tunable structures adept at handling diverse feature matching problems. UFM handles both feature matching within a single modality and matching across different modalities. Additionally, we propose a data augmentation algorithm and a staged pre-training strategy that together mitigate the sparsity of data in certain modalities and the imbalance across modality datasets. Experimental results demonstrate that UFM achieves strong generalization and performance across a variety of feature matching tasks. The code will be released at: https://github.com/LiaoYun0x0/UFM.
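To make the idea of a shared matcher with per-modality assistants concrete, below is a minimal, illustrative sketch in PyTorch. It is not the paper's actual architecture: the class names (ModalityAssistant, UFMSketch), the bottleneck-adapter design, and the dual-softmax matching head (as popularized by LoFTR) are all assumptions made here for illustration; only the overall pattern of a shared encoder plus small, fine-tunable modality-specific modules follows the abstract.

```python
# Illustrative sketch only: a shared feature-matching backbone with small,
# per-modality "assistant" modules that can be fine-tuned for each image
# modality. All names and design choices here are assumptions, not the
# architecture described in the paper.
import torch
import torch.nn as nn

class ModalityAssistant(nn.Module):
    """Small residual bottleneck adapter per modality (assumed design)."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual form: leaves shared features unchanged when untrained.
        return x + self.up(self.act(self.down(x)))

class UFMSketch(nn.Module):
    """Shared transformer encoder + per-modality assistants + matching head."""
    def __init__(self, dim: int = 256, modalities=("visible", "infrared")):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.assistants = nn.ModuleDict({m: ModalityAssistant(dim) for m in modalities})
        self.temperature = 0.1

    def encode(self, tokens: torch.Tensor, modality: str) -> torch.Tensor:
        # Shared encoding followed by a modality-specific correction.
        return self.assistants[modality](self.encoder(tokens))

    def match(self, feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        # Dual-softmax over the patch similarity matrix, a common dense
        # matching head; assumed here, not confirmed by the abstract.
        sim = feats_a @ feats_b.transpose(-1, -2) / self.temperature
        return sim.softmax(dim=-1) * sim.softmax(dim=-2)

# Usage: match patch tokens of a visible image against an infrared image.
model = UFMSketch()
a = torch.randn(1, 100, 256)  # 100 patch tokens from the visible image
b = torch.randn(1, 100, 256)  # 100 patch tokens from the infrared image
scores = model.match(model.encode(a, "visible"), model.encode(b, "infrared"))
print(scores.shape)  # torch.Size([1, 100, 100]) soft mutual-match scores
```

In this sketch, the shared encoder could be pre-trained in stages (e.g., first on abundant modalities, then with the assistants fine-tuned on sparse ones), which is one plausible reading of the staged pre-training strategy the abstract mentions.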