Intelligent Systems with Applications (Nov 2023)

Robustness-Eva-MRC: Assessing and analyzing the robustness of neural models in extractive machine reading comprehension

  • Jingliang Fang,
  • Hua Xu,
  • Zhijing Wu,
  • Kai Gao,
  • Xiaoyin Che,
  • Haotian Hui

Journal volume & issue
Vol. 20, p. 200287

Abstract

Deep neural networks, despite their remarkable success in various language understanding tasks, have been found vulnerable to adversarial attacks and subtle input perturbations, revealing a robustness shortfall. To explore this issue, this paper presents Robustness-Eva-MRC, an interactive platform designed to assess and analyze the robustness of pre-trained and large-scale language models on extractive machine reading comprehension (MRC) tasks. The platform integrates eight adversarial attack methods at the character, word, and sentence levels and applies them to five MRC datasets, constructing challenging adversarial test sets. It then evaluates MRC models on both the original and adversarial sets, and the resulting performance gaps yield insights into their robustness. Moreover, Robustness-Eva-MRC provides comprehensive visualizations and detailed case studies, enhancing the understanding of model robustness. A screencast video and additional material are available at https://github.com/distantJing/Robustness-Eva-MRC.
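To make the evaluation idea concrete, the sketch below (not the authors' implementation; the `exact_match` metric and example predictions are illustrative assumptions) shows how a robustness gap can be quantified by scoring the same extractive MRC model on an original test set and its adversarially perturbed counterpart.

```python
def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized predicted span equals the gold span, else 0.0."""
    return float(prediction.strip().lower() == gold.strip().lower())


def evaluate(predictions: list[str], golds: list[str]) -> float:
    """Average exact-match score over a test set."""
    return sum(exact_match(p, g) for p, g in zip(predictions, golds)) / len(golds)


# Hypothetical model outputs on the clean and attacked versions of the same questions.
original_preds = ["Denver Broncos", "1963", "gold"]
adversarial_preds = ["Denver Broncos", "1964", "champ bowl"]
golds = ["Denver Broncos", "1963", "gold"]

em_original = evaluate(original_preds, golds)
em_adversarial = evaluate(adversarial_preds, golds)

# The robustness gap: how much performance degrades under the adversarial attack.
print(f"EM original:    {em_original:.2f}")
print(f"EM adversarial: {em_adversarial:.2f}")
print(f"Gap:            {em_original - em_adversarial:.2f}")
```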

Keywords