IEEE Access (Jan 2020)
Leveraging Pre-Trained Language Model for Summary Generation on Short Text
Abstract
Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pre-trained language models, which have achieved satisfactory results in text summarization tasks. However, BERT has not performed well on the generation of Chinese short-text summaries. In this work, we propose a novel short-text summary generation model based on keyword templates, which uses templates found in the training data to extract keywords that guide summary generation. Experimental results on the LCSTS dataset show that our model outperforms the baseline models. Further analysis shows that the methods used in our model can generate high-quality summaries.
Keywords