Key Laboratory of Technology in Geo-spatial Information Processing and Application System, Institute of Electronics, Chinese Academy of Sciences, Beijing, China
Guandong Xu
Advanced Analytics Institute, University of Technology Sydney, Sydney, NSW, Australia
Xiao Liang
School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing, China
Guangluan Xu
Key Laboratory of Technology in Geo-spatial Information Processing and Application System, Institute of Electronics, Chinese Academy of Sciences, Beijing, China
Feng Li
Key Laboratory of Technology in Geo-spatial Information Processing and Application System, Institute of Electronics, Chinese Academy of Sciences, Beijing, China
Kun Fu
Key Laboratory of Technology in Geo-spatial Information Processing and Application System, Institute of Electronics, Chinese Academy of Sciences, Beijing, China
Lei Wang
Key Laboratory of Technology in Geo-spatial Information Processing and Application System, Institute of Electronics, Chinese Academy of Sciences, Beijing, China
Tinglei Huang
Key Laboratory of Technology in Geo-spatial Information Processing and Application System, Institute of Electronics, Chinese Academy of Sciences, Beijing, China
Relation detection plays a crucial role in knowledge base question answering, and it is challenging because of the high variance of relation expressions in real-world questions. Traditional relation detection models based on deep learning follow an encoding-comparing paradigm, in which the question and the candidate relation are represented as vectors so that their semantic similarity can be compared. The max- or average-pooling operation, which compresses the sequence of words into a fixed-dimensional vector, becomes the bottleneck of information flow. In this paper, we propose an attention-based word-level interaction model (ABWIM) to alleviate the information loss caused by aggregating the sequence into a fixed-dimensional vector before the comparison. First, an attention mechanism is adopted to learn the soft alignments between words from the question and the relation. Then, fine-grained comparisons are performed on the aligned words. Finally, the comparison results are merged with a simple recurrent layer to estimate the semantic similarity. In addition, a dynamic sample selection strategy is proposed to accelerate the training procedure without decreasing performance. Experimental results of relation detection on both the SimpleQuestions and WebQuestions datasets show that ABWIM achieves state-of-the-art accuracy, demonstrating its effectiveness.
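To make the word-level interaction idea concrete, the following is a minimal NumPy sketch of the first two steps described above: dot-product attention produces soft alignments between question and relation words, and each question word is then paired with its attention-weighted relation summary for fine-grained comparison. This is an illustrative sketch of the general technique, not the paper's exact architecture; the function name, dot-product scoring, and concatenation-based comparison are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def word_level_interaction(Q, R):
    """Soft-align relation words to each question word, then compare.

    Q: (m, d) question word vectors; R: (n, d) relation word vectors.
    Returns (m, 2d) features: each question word concatenated with its
    attention-weighted summary of the relation words. A recurrent layer
    would then merge these per-word comparison results into a similarity
    score (not shown here).
    """
    scores = Q @ R.T                  # (m, n) dot-product alignment scores
    alpha = softmax(scores, axis=1)   # soft alignment over relation words
    aligned = alpha @ R               # (m, d) soft-aligned relation vectors
    return np.concatenate([Q, aligned], axis=1)

# Toy example: a 5-word question against a 3-word relation, d = 8.
rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 8))
R = rng.standard_normal((3, 8))
feats = word_level_interaction(Q, R)
print(feats.shape)  # (5, 16)
```

Because the alignment is computed per question word, no pooling over the full sequence happens before the comparison, which is the information bottleneck the abstract describes.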