IEEE Access (Jan 2020)

Multi-Task Learning Model Based on Multi-Scale CNN and LSTM for Sentiment Classification

  • Ning Jin,
  • Jiaxian Wu,
  • Xiang Ma,
  • Ke Yan,
  • Yuchang Mo

DOI
https://doi.org/10.1109/ACCESS.2020.2989428
Journal volume & issue
Vol. 8
pp. 77060–77072

Abstract


Sentiment classification is an interesting and crucial research topic in the field of natural language processing (NLP). Data-driven methods, including machine learning and deep learning techniques, provide a direct and effective solution to the sentiment classification problem. However, classification performance declines when the input includes review comments from multiple tasks, and how best to construct a sentiment classification model in a multi-task setting remains an open question in the field. In this study, to address the multi-task sentiment classification problem, we propose a multi-task learning model based on a multi-scale convolutional neural network (CNN) and long short-term memory (LSTM) for multi-task, multi-scale sentiment classification (MTL-MSCNN-LSTM). The model combines global features with local features extracted at different text scales to model and represent sentences. The multi-task learning framework improves encoder quality, which in turn improves sentiment classification results. Six different types of product review datasets were employed in the experiments. With accuracy and F1-score as evaluation metrics, and compared against methods such as single-task learning and LSTM-encoder baselines, the proposed MTL-MSCNN-LSTM model outperforms most existing methods.
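To make the architecture described in the abstract more concrete, the following is a minimal PyTorch sketch of a shared multi-scale-CNN + LSTM encoder with per-task classification heads. The kernel sizes, layer dimensions, class names (e.g. MTLMSCNNLSTM), and the way the CNN and LSTM branches are fused are illustrative assumptions for this sketch, not the paper's exact configuration.

# Sketch of a shared multi-scale CNN + BiLSTM sentence encoder with one
# classification head per task. Hyperparameters and fusion strategy are
# assumptions; the paper's exact design may differ.
import torch
import torch.nn as nn


class MTLMSCNNLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_tasks=6, num_classes=2,
                 kernel_sizes=(2, 3, 4), num_filters=64, lstm_hidden=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Multi-scale CNN: one Conv1d per kernel size captures local
        # n-gram features at a different scale.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k, padding=k // 2)
             for k in kernel_sizes]
        )
        # BiLSTM captures the global, sequential context of the sentence.
        self.lstm = nn.LSTM(embed_dim, lstm_hidden, batch_first=True,
                            bidirectional=True)
        shared_dim = num_filters * len(kernel_sizes) + 2 * lstm_hidden
        # One classification head per task; the encoder above is shared.
        self.heads = nn.ModuleList(
            [nn.Linear(shared_dim, num_classes) for _ in range(num_tasks)]
        )

    def forward(self, token_ids, task_id):
        x = self.embedding(token_ids)                  # (B, T, E)
        # Local features: max-pool each convolutional branch over time.
        c = x.transpose(1, 2)                          # (B, E, T)
        local = [torch.relu(conv(c)).max(dim=2).values for conv in self.convs]
        # Global features: final forward/backward hidden states of the BiLSTM.
        _, (h, _) = self.lstm(x)
        global_feat = torch.cat([h[-2], h[-1]], dim=1)
        features = torch.cat(local + [global_feat], dim=1)
        return self.heads[task_id](features)           # task-specific logits


# Example forward pass on random token ids for task 0.
model = MTLMSCNNLSTM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (4, 50)), task_id=0)
print(logits.shape)  # torch.Size([4, 2])

In this sketch, sharing the embedding, CNN, and LSTM layers across all six review tasks is what makes the setup "multi-task": each task contributes gradients to the shared encoder while keeping its own output head.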

Keywords