IEEE Access (Jan 2023)

Robust Contrastive Learning With Dynamic Mixed Margin

  • Junhyuk So,
  • Yongtaek Lim,
  • Yewon Kim,
  • Changdae Oh,
  • Kyungwoo Song

DOI
https://doi.org/10.1109/ACCESS.2023.3286931
Journal volume & issue
Vol. 11
pp. 60211–60222

Abstract

Contrastive learning is one of the most promising approaches to representation learning: it enforces that positive pairs become close while negative pairs become distant, exploiting the relative proximity between positive and negative pairs. However, contrastive learning may fail to handle easily distinguished positive-negative pairs, because the gradient for such pairs vanishes. To overcome this problem, we propose a dynamic mixed margin (DMM) loss that generates augmented hard positive-negative pairs that are not easily separated. DMM generates hard pairs by interpolating samples with Mixup, and it adopts a dynamic margin that incorporates the interpolation ratio; this dynamic adaptation improves representation learning. DMM encourages distant positive pairs to move closer, while pushing strongly adjacent positive pairs slightly apart, which alleviates overfitting. The proposed DMM is a plug-and-play module compatible with diverse contrastive and metric learning losses. We validate that DMM is superior to other baselines on various tasks, including video-text retrieval and recommender systems, in both unimodal and multimodal settings. Moreover, representations learned with DMM remain robust even under missing modalities, which frequently occur in real-world datasets. An implementation of DMM for the downstream tasks is available at: https://github.com/teang1995/DMM
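The core idea described above — synthesizing a hard negative by Mixup interpolation and scaling the margin by the interpolation ratio — can be sketched in a triplet-style loss. The exact formulation below is an illustrative assumption, not the paper's definition: the function name `dmm_triplet_loss`, the use of Euclidean distance, and the linear margin schedule `(1 - lam) * base_margin` are all choices made here for clarity.

```python
import numpy as np

def dmm_triplet_loss(anchor, positive, negative, lam, base_margin=0.2):
    """Illustrative sketch of a dynamic-mixed-margin triplet loss.

    A hard negative is synthesized by Mixup between the positive and
    negative embeddings. The required margin shrinks as the mixed
    negative approaches the positive (large `lam`), so nearly-positive
    synthetic negatives are not pushed away too aggressively.
    """
    # Mixup-interpolated hard negative: lam -> 1 makes it look positive.
    mixed_neg = lam * positive + (1.0 - lam) * negative
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - mixed_neg)
    # Dynamic margin tied to the interpolation ratio (assumed schedule).
    margin = (1.0 - lam) * base_margin
    return max(0.0, d_pos - d_neg + margin)
```

With `lam = 0` this reduces to a standard triplet loss with margin `base_margin`; an easily separated pair (negative far from the anchor) yields zero loss and thus no gradient, which is exactly the failure mode the mixed hard pairs are meant to address.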

Keywords