Applied Sciences (Mar 2022)

Exploiting Diverse Information in Pre-Trained Language Model for Multi-Choice Machine Reading Comprehension

  • Ziwei Bai,
  • Junpeng Liu,
  • Meiqi Wang,
  • Caixia Yuan,
  • Xiaojie Wang

DOI: https://doi.org/10.3390/app12063072
Journal volume & issue: Vol. 12, no. 6, p. 3072

Abstract

Answering different multi-choice machine reading comprehension (MRC) questions generally requires different information, owing to the abundant diversity of questions, options and passages. Recently, pre-trained language models, which provide rich information, have been widely used to address MRC tasks. Most existing work focuses only on the output representation at the top layer of the model; the subtle and beneficial information provided by the intermediate layers is ignored. This paper therefore proposes a multi-decision transformer model that builds multiple decision modules on the outputs of different layers to handle the variety of questions and passages. To prevent the diversity of information across layers from being damaged during fine-tuning, we also propose a learning-rate decaying method that controls the updating speed of the parameters in different blocks. Experimental results on multiple publicly available datasets show that our model can answer different questions by utilizing the representations of different layers, and can speed up inference while maintaining considerable accuracy.
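
The core architectural idea, attaching a separate decision module to the output of several intermediate encoder layers, can be sketched as follows in PyTorch with Hugging Face Transformers. This is a minimal illustration of the general multi-exit pattern the abstract describes, not the authors' released implementation; the names `MultiDecisionModel` and `exit_layers`, the choice of layers (4, 8, 12), and the use of the [CLS] vector as the per-option representation are all assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiDecisionModel(nn.Module):
    """One scoring head per selected encoder layer (illustrative sketch)."""

    def __init__(self, model_name="bert-base-uncased", exit_layers=(4, 8, 12)):
        super().__init__()
        # output_hidden_states=True exposes the output of every encoder layer
        self.encoder = AutoModel.from_pretrained(
            model_name, output_hidden_states=True
        )
        hidden = self.encoder.config.hidden_size
        self.exit_layers = exit_layers
        # one linear head per exit layer; each head scores a single option
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in exit_layers)

    def forward(self, input_ids, attention_mask):
        # input_ids: (batch * num_options, seq_len) -- one row per option,
        # the usual flattened multi-choice encoding
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # hidden_states[0] is the embedding output, so layer k is index k
        return [
            head(out.hidden_states[k][:, 0])  # [CLS] vector at layer k
            for k, head in zip(self.exit_layers, self.heads)
        ]  # list of per-layer option scores: one candidate decision per exit
```

At inference, an early-exit scheme of this kind can stop at a shallow head when its decision is confident enough, which is how a multi-decision model can speed up inference while keeping accuracy.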
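The abstract's second idea, slowing parameter updates in some blocks so fine-tuning does not erase the layer-wise diversity of information, is commonly realized as layer-wise learning-rate decay. Below is a minimal sketch under that interpretation; the decay factor, the name parsing, and the grouping scheme are assumptions, not the paper's exact method.

```python
from torch.optim import AdamW

def layerwise_param_groups(model, base_lr=2e-5, decay=0.9, num_layers=12):
    """Assign lr = base_lr * decay**(num_layers - layer) per encoder block,
    so lower (more general) blocks update more slowly."""
    groups = []
    for name, param in model.named_parameters():
        if "encoder.layer." in name:
            # Transformer block params are named "...encoder.layer.<i>...."
            layer = int(name.split("encoder.layer.")[1].split(".")[0])
        elif "embeddings" in name:
            layer = 0                 # embeddings change slowest
        else:
            layer = num_layers        # decision heads / pooler use the full lr
        lr = base_lr * (decay ** (num_layers - layer))
        groups.append({"params": [param], "lr": lr})
    return groups

# usage with the sketch above:
# model = MultiDecisionModel()
# optimizer = AdamW(layerwise_param_groups(model))
```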

Keywords