IEEE Access (Jan 2019)

Resource Mention Extraction for MOOC Discussion Forums

  • Ya-Hui An,
  • Liangming Pan,
  • Min-Yen Kan,
  • Qiang Dong,
  • Yan Fu

DOI
https://doi.org/10.1109/ACCESS.2019.2924250
Journal volume & issue
Vol. 7
pp. 87887–87900

Abstract

In discussions hosted on forums for massive open online courses (MOOCs), references to online learning resources are often of central importance. They contextualize the discussion, anchoring the participants' presentation of the issues and their understanding. However, such resources are usually mentioned in free text, without appropriate hyperlinks to the resources themselves. Automated hyperlinking and categorization of learning resource mentions would facilitate discussion and search within MOOC forums, and would also benefit the contextualization of such resources across disparate views. We propose the novel problem of learning resource mention identification in MOOC forums, i.e., identifying resource mentions in discussions and classifying them into pre-defined resource types. As this is a novel task with no publicly available data, we first contribute a large-scale labeled dataset, dubbed the forum resource mention (FoRM) dataset, to facilitate current and future research on this task. FoRM contains over 10 000 real-world forum threads collected in collaboration with Coursera, with more than 23 000 manually labeled resource mentions. We then formulate the task as a sequence tagging problem and investigate solution architectures to address it. Importantly, we identify two major challenges that hinder the application of sequence tagging models to the task: (1) the diversity of resource mention expressions and (2) long-range contextual dependencies. We address these challenges by incorporating character-level and thread context information into an LSTM-CRF model. First, we incorporate a character encoder to address the out-of-vocabulary problem caused by the diversity of mention expressions. Second, to address the context dependency challenge, we encode thread contexts using an RNN-based context encoder and apply an attention mechanism to selectively leverage useful context information during sequence tagging. Experiments on FoRM show that the proposed method notably improves on baseline deep sequence tagging models, significantly bettering performance on instances that exemplify the two challenges.
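To make the described architecture concrete, the following is a minimal sketch in PyTorch (an assumption; the paper does not specify a framework) of a tagger combining a character-level BiLSTM encoder with attention over an RNN-encoded thread context. All names, dimensions, and the final softmax emission layer (standing in for the paper's CRF layer, omitted here for brevity) are illustrative, not the authors' implementation.

    import torch
    import torch.nn as nn

    class CharAwareContextTagger(nn.Module):
        def __init__(self, word_vocab, char_vocab, n_tags,
                     word_dim=100, char_dim=25, hidden=128):
            super().__init__()
            self.word_emb = nn.Embedding(word_vocab, word_dim)
            self.char_emb = nn.Embedding(char_vocab, char_dim)
            # Character BiLSTM: builds one vector per word from its characters,
            # mitigating out-of-vocabulary resource mention spellings.
            self.char_lstm = nn.LSTM(char_dim, char_dim, bidirectional=True,
                                     batch_first=True)
            # Thread-context encoder: BiLSTM over the context token sequence.
            self.ctx_lstm = nn.LSTM(word_dim, hidden, bidirectional=True,
                                    batch_first=True)
            # Main sequence encoder over concatenated [word; char] features.
            self.seq_lstm = nn.LSTM(word_dim + 2 * char_dim, hidden,
                                    bidirectional=True, batch_first=True)
            self.attn = nn.Linear(2 * hidden, 2 * hidden)
            # Per-token tag scores; the paper places a CRF on top instead.
            self.out = nn.Linear(4 * hidden, n_tags)

        def forward(self, words, chars, ctx_words):
            # words: (B, T)  chars: (B, T, C)  ctx_words: (B, S)
            B, T, C = chars.shape
            # Encode each word's characters; keep the final BiLSTM states.
            _, (h, _) = self.char_lstm(self.char_emb(chars.view(B * T, C)))
            char_rep = torch.cat([h[0], h[1]], dim=-1).view(B, T, -1)
            token_rep = torch.cat([self.word_emb(words), char_rep], dim=-1)
            H, _ = self.seq_lstm(token_rep)                   # (B, T, 2h)
            ctx, _ = self.ctx_lstm(self.word_emb(ctx_words))  # (B, S, 2h)
            # Dot-product attention: each token attends over the encoded
            # thread context and mixes in a context summary vector.
            scores = torch.bmm(self.attn(H), ctx.transpose(1, 2))   # (B, T, S)
            summary = torch.bmm(torch.softmax(scores, dim=-1), ctx)
            return self.out(torch.cat([H, summary], dim=-1))  # (B, T, n_tags)

    # Tiny smoke test with random token ids.
    model = CharAwareContextTagger(word_vocab=50, char_vocab=30, n_tags=5)
    logits = model(torch.randint(50, (2, 7)),
                   torch.randint(30, (2, 7, 10)),
                   torch.randint(50, (2, 20)))
    print(logits.shape)  # torch.Size([2, 7, 5])

In this sketch the attention step is what lets a tag decision for a token depend on thread context far outside the current post, which is the long-range dependency challenge the abstract identifies; the character encoder supplies subword evidence for unseen mention forms.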

Keywords