IEEE Access (Jan 2018)

Integration of Image Feature and Word Relevance: Toward Automatic Image Annotation in Cyber-Physical-Social Systems

  • Zhaolong Ning,
  • Guanghai Zhou,
  • Zhikui Chen,
  • Qiucen Li

DOI
https://doi.org/10.1109/ACCESS.2018.2864332
Journal volume & issue
Vol. 6, pp. 44190–44198

Abstract


Image annotation is challenging due to the explosive increase of image data in cyber-physical-social systems. Because of the semantic gap between images and their corresponding labels, the problem has attracted extensive attention in recent years. However, most existing research neglects the imbalanced distribution of classes and the internal relevance among image labels; in addition, weak image labeling degrades annotation performance. To address these issues, we propose a learning model for image annotation that integrates deep image features with label relevance. Specifically, we first employ a convolutional neural network to extract deep features of images and apply the synthetic minority oversampling technique (SMOTE) to handle class imbalance. We then exploit label correlations, including symbiotic and semantic relationships, to compute the relevance of label sets. This relevance is incorporated into one classifier to reconstruct the complete label sets, while a second classifier learns the mapping from image features to the reconstructed label sets. In addition, we propose a joint convex loss function that combines the two classifiers via co-regularization and compels their predictions to be consistent. We evaluate the proposed method on two benchmark data sets, and the experimental results demonstrate that it outperforms several state-of-the-art solutions.
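Two ingredients of the pipeline described above can be illustrated in a few lines of NumPy: SMOTE-style oversampling of a minority class, and a co-regularized joint loss that fits two classifiers to the labels while forcing them to agree. This is a minimal sketch under our own assumptions (squared losses, Euclidean neighbours); it is not the authors' implementation, and the function names are hypothetical.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """SMOTE-style oversampling sketch: synthesize new minority samples by
    interpolating between a minority sample and one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # a sample is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]     # k nearest neighbours per sample
    synth = []
    for _ in range(n_new):
        i = rng.integers(n)                      # random minority sample
        j = nn[i, rng.integers(min(k, n - 1))]   # one of its neighbours
        lam = rng.random()                       # interpolation factor in [0, 1]
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(synth)

def coregularized_loss(F, G, Y, lam=1.0):
    """Sketch of a joint convex loss: squared error of each classifier's
    scores F and G against the label matrix Y, plus a co-regularization
    term penalizing disagreement between the two classifiers.
    (The paper's exact loss terms are not specified here.)"""
    return (np.mean((F - Y) ** 2)
            + np.mean((G - Y) ** 2)
            + lam * np.mean((F - G) ** 2))
```

Because each synthetic sample is a convex combination of two real minority samples, the oversampled points stay inside the convex hull of the minority class; and since every term of the joint loss is a convex quadratic, the combined objective remains convex in `(F, G)`.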
