IEEE Access (Jan 2020)
Visual Sentiment Analysis With Active Learning
Abstract
Visual Sentiment Analysis (VSA) has attracted wide attention since more and more people are willing to express their emotions and opinions via visual content on social media. Meanwhile, extensive datasets drive the rapid development of deep neural networks for this task. However, annotating large-scale datasets is very expensive and time-consuming. In this paper, we propose a novel active learning framework that requires only a few labeled training samples to obtain an effective sentiment analysis model. First, we attach a new branch, named the "texture module", to a traditional Convolutional Neural Network (CNN). In this branch, an affective vector is obtained by computing inner products of feature maps from different convolutional blocks; we use this vector to distinguish affective images. Second, the query strategy combines the classification scores from both the traditional CNN and the texture module. Samples selected by this query strategy are then used to train our model. Extensive experiments on four public affective datasets show that our approach achieves promising results for VSA with only a few labeled training samples.
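The two components described in the abstract can be sketched in code: inner products of a feature map's channels form a Gram matrix (a standard texture descriptor), and an uncertainty-style query score can combine the two branches' predictions. The shapes, the concatenation across blocks, and the entropy-based scoring below are illustrative assumptions, not the authors' exact architecture or strategy.

```python
import numpy as np

def gram_vector(feature_map):
    """Flattened Gram matrix of one convolutional feature map.

    feature_map: array of shape (C, H, W) -- channels, height, width.
    Each Gram entry is the inner product of two channels, normalized
    by the number of spatial positions.
    """
    c, h, w = feature_map.shape
    f = feature_map.reshape(c, h * w)      # one row per channel
    return (f @ f.T / (h * w)).flatten()   # (C * C,) vector

def affective_vector(feature_maps):
    """Concatenate Gram vectors from several convolutional blocks
    into a single texture descriptor (the 'affective vector')."""
    return np.concatenate([gram_vector(fm) for fm in feature_maps])

def entropy(probs):
    """Shannon entropy of a probability vector."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def query_score(cnn_probs, texture_probs):
    """Hypothetical query strategy: average the prediction entropies
    of the CNN branch and the texture branch; higher scores mark
    samples the model is most uncertain about, hence worth labeling."""
    return 0.5 * (entropy(cnn_probs) + entropy(texture_probs))

# Example: two hypothetical blocks with 4 and 8 channels.
rng = np.random.default_rng(0)
blocks = [rng.standard_normal((4, 16, 16)),
          rng.standard_normal((8, 8, 8))]
vec = affective_vector(blocks)
print(vec.shape)  # (4*4 + 8*8,) = (80,)
```

In an active-learning loop, unlabeled images would be ranked by `query_score` and the top-scoring ones sent for annotation before retraining.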
Keywords