EURASIP Journal on Image and Video Processing (Jan 2019)

Semantic embeddings of generic objects for zero-shot learning

  • Tristan Hascoet,
  • Yasuo Ariki,
  • Tetsuya Takiguchi

DOI
https://doi.org/10.1186/s13640-018-0371-x
Journal volume & issue
Vol. 2019, no. 1
pp. 1 – 14

Abstract

Zero-shot learning (ZSL) models use semantic representations of visual classes to transfer the knowledge learned from a set of training classes to a set of unknown test classes. In the context of generic object recognition, previous research has mainly focused on developing custom architectures, loss functions, and regularization schemes for ZSL using word embeddings as the semantic representation of visual classes. In this paper, we focus exclusively on the effect of different semantic representations on the accuracy of ZSL. We first conduct a large-scale evaluation of semantic representations learned from either words, text documents, or knowledge graphs on the standard ImageNet ZSL benchmark. We show that, using appropriate semantic representations of visual classes, a basic linear regression model outperforms the vast majority of previously proposed approaches. We then analyze the classification errors of our model to provide insights into the relevance and limitations of the different semantic representations we investigate. Finally, our investigation helps us understand the reasons behind the success of recently proposed approaches based on graph convolutional networks (GCN), which have shown dramatic improvements over previous state-of-the-art models.
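The "basic linear regression model" mentioned in the abstract can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: all data, dimensions, and variable names here are hypothetical stand-ins (random vectors in place of real word embeddings and image features). The idea is to fit a linear map from visual features to class semantic embeddings on seen classes, then classify samples from unseen classes by nearest embedding in the semantic space.

```python
import numpy as np

# Hypothetical toy setup: random stand-ins for image features and
# semantic class embeddings (e.g., word vectors in the real setting).
rng = np.random.default_rng(0)
d_vis, d_sem = 64, 16               # visual / semantic dimensions (toy sizes)
n_train, n_seen, n_unseen = 500, 10, 5

# Semantic embeddings for seen (training) and unseen (test) classes
S_seen = rng.normal(size=(n_seen, d_sem))
S_unseen = rng.normal(size=(n_unseen, d_sem))

# Toy training data: visual features generated from class embeddings
# through a random linear map, plus noise
y_train = rng.integers(0, n_seen, size=n_train)
A_true = rng.normal(size=(d_sem, d_vis))
X_train = S_seen[y_train] @ A_true + 0.1 * rng.normal(size=(n_train, d_vis))

# Ridge regression in closed form: W maps visual features -> semantic space
lam = 1.0
W = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(d_vis),
    X_train.T @ S_seen[y_train],
)

def predict(x, class_embeddings):
    """Zero-shot prediction: project the feature into semantic space,
    then return the index of the highest-scoring class embedding."""
    scores = class_embeddings @ (x @ W)
    return int(np.argmax(scores))

# At test time, classify against embeddings of classes never seen in training
pred = predict(rng.normal(size=d_vis), S_unseen)
```

Because the classifier reduces to a nearest-embedding lookup, the quality of the semantic embeddings directly bounds the achievable accuracy, which is why the choice among word, document, and knowledge-graph representations matters so much in the paper's evaluation.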