AI Open (Jan 2024)

Label-aware debiased causal reasoning for Natural Language Inference

  • Kun Zhang,
  • Dacao Zhang,
  • Le Wu,
  • Richang Hong,
  • Ye Zhao,
  • Meng Wang

Journal volume & issue
Vol. 5
pp. 70–78

Abstract

Recently, researchers have argued that the impressive performance of Natural Language Inference (NLI) models is largely due to spurious correlations in the training data, which make models brittle and poorly generalizable. Some work has made preliminary debiasing attempts through data-driven interventions or model-level debiased learning. Despite this progress, existing debiasing methods either suffer from the high cost of data annotation or require elaborate designs to identify biased factors. Through detailed investigation and data analysis, we argue that label information can provide meaningful guidance for identifying these spurious correlations in training data, a signal that has not received enough attention. We therefore design a novel Label-aware Debiased Causal Reasoning Network (LDCRN). Specifically, guided by the data analysis, we first build a causal graph that describes the causal relations and spurious correlations in NLI. Then, we employ an NLI model (e.g., RoBERTa) to estimate the total causal effect of the input sentences on the labels. Meanwhile, we design a novel label-aware biased module that models the spurious correlations and estimates their causal effect in a fine-grained manner. Debiasing is realized by subtracting this biased causal effect from the total causal effect. Finally, extensive experiments on two well-known NLI datasets and multiple human-annotated challenging test sets demonstrate the superiority of LDCRN. Moreover, we have developed novel challenging test sets based on MultiNLI to facilitate research in the community.
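As a rough illustration of the subtraction step described in the abstract, the PyTorch sketch below debiases an NLI classifier by subtracting a bias model's logits from the main model's logits. This is not the authors' LDCRN implementation: the class and parameter names (DebiasedNLI, alpha), the hypothesis-only bias input, and the linear stand-ins for the encoders are all assumptions, and the label-aware biased module is reduced to a generic classifier here.

    import torch
    import torch.nn as nn

    class DebiasedNLI(nn.Module):
        """Causal-effect subtraction for NLI debiasing (illustrative sketch).

        main_model scores the full premise/hypothesis pair (total causal
        effect); bias_model scores only bias-prone features (e.g., the
        hypothesis alone), standing in for a label-aware biased module.
        """

        def __init__(self, main_model, bias_model, alpha=1.0):
            super().__init__()
            self.main_model = main_model  # e.g., a RoBERTa-based NLI head
            self.bias_model = bias_model  # models spurious correlations
            self.alpha = alpha            # assumed fusion weight

        def forward(self, pair_features, bias_features):
            total_effect = self.main_model(pair_features)  # logits from full input
            bias_effect = self.bias_model(bias_features)   # logits from biased features
            # Debiased prediction: remove the estimated biased causal effect
            # from the total causal effect.
            return total_effect - self.alpha * bias_effect

    # Toy usage with linear stand-ins for the encoders (768-d features, 3 labels).
    main = nn.Linear(768, 3)
    bias = nn.Linear(768, 3)
    model = DebiasedNLI(main, bias, alpha=1.0)
    logits = model(torch.randn(8, 768), torch.randn(8, 768))  # shape: (8, 3)

The subtraction steers predictions away from label-correlated artifacts; the actual LDCRN computes these effects within an explicit causal graph rather than through this simple logit arithmetic.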

Keywords