IEEE Access (Jan 2024)

Core-View Contrastive Learning Network for Building Lightweight Cross-Domain Consultation System

  • Jiabin Zheng,
  • Fangyi Xu,
  • Wei Chen,
  • Zihao Fang,
  • Jiahui Yao

DOI
https://doi.org/10.1109/ACCESS.2024.3395330
Journal volume & issue
Vol. 12
pp. 65615–65629

Abstract


Cross-domain consultation systems have become essential in many critical applications, such as online citizen complaint systems. Addressing complaints, which often exhibit distinctly oral language, requires retrieving and integrating knowledge from diverse professional domains. This is a typical cross-domain problem. The prevailing approach of applying generative large language models to it, however, faces challenges of model scale as well as drawbacks such as hallucination and limited interpretability. To address these challenges, we propose the Core-View Contrastive Learning (CVCL) network. By combining contrastive learning with an integrated core-adaptive augmentation module, the CVCL network achieves accurate cross-domain information matching. Our objective is to construct a lightweight, precise, and interpretable cross-domain consultation system that overcomes the limitations large language models encounter on such tasks. Empirical validation on real-world datasets demonstrates the effectiveness of the proposed method: it matches the accuracy of large language models on text-matching tasks and surpasses the best baseline model by over 24 percentage points in F1-score on classification tasks. Moreover, our lightweight model retains 96% of the full model's performance while using only 6% of its parameters.
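The abstract does not give the CVCL network's training objective or the details of the core-adaptive augmentation module, but contrastive learning for text matching is commonly trained with an InfoNCE-style objective, where each query embedding is pulled toward its matched (positive) key and pushed away from the other keys in the batch. A minimal NumPy sketch of that standard loss, purely as an illustration and not the authors' implementation:

```python
import numpy as np

def info_nce_loss(queries, keys, temperature=0.07):
    """InfoNCE contrastive loss on a batch of embeddings.

    The positive key for queries[i] is keys[i]; every other key in the
    batch serves as a negative. Returns the mean cross-entropy of
    picking the correct key for each query.
    """
    # L2-normalize so the dot product is cosine similarity
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)

    logits = q @ k.T / temperature                # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # matched pairs lie on the diagonal
    return -np.mean(np.diag(log_probs))
```

When query and key embeddings of matched pairs are nearly identical, the loss approaches zero; misaligned pairs drive it up, which is what pushes a matching model to separate domains in embedding space.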

Keywords