Mathematics (Oct 2022)

Residual-Prototype Generating Network for Generalized Zero-Shot Learning

  • Zeqing Zhang,
  • Xiaofan Li,
  • Tai Ma,
  • Zuodong Gao,
  • Cuihua Li,
  • Weiwei Lin

DOI
https://doi.org/10.3390/math10193587
Journal volume & issue
Vol. 10, no. 19
p. 3587

Abstract


Conventional zero-shot learning aims to train a classifier on a training set (seen classes) to recognize instances of novel classes (unseen classes) via class-level semantic attributes. In generalized zero-shot learning (GZSL), the classifier must recognize both seen and unseen classes, a problem of extreme data imbalance. To address this, feature generative methods have been proposed to compensate for the lack of unseen-class data. Current generative methods use class semantic attributes as cues for synthesizing visual features, which can be considered a mapping from semantic attributes to visual features. However, this mapping cannot effectively transfer knowledge learned from seen classes to unseen classes because the information in semantic attributes and the information in visual features are asymmetric: semantic attributes contain key category-description information, while visual features also carry visual information that cannot be represented by semantics. To this end, we propose a residual-prototype-generating network (RPGN) for GZSL that extracts residual visual features from the original visual features with an encoder–decoder and synthesizes prototype visual features associated with semantic attributes with a disentangle regressor. Experimental results show that the proposed method achieves competitive results with significant gains on four GZSL benchmark datasets.
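The decomposition described above — a synthetic visual feature as the sum of a semantic-driven prototype and a non-semantic residual — can be sketched in a minimal NumPy toy example. The linear maps, dimensions, and random latent input below are illustrative assumptions, not the paper's actual architecture (which uses an encoder–decoder and a disentangle regressor):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
attr_dim, feat_dim, latent_dim = 85, 2048, 64

# Hypothetical linear stand-in for the disentangle regressor:
# class-level semantic attributes -> prototype visual features
W_proto = rng.normal(scale=0.01, size=(attr_dim, feat_dim))

def prototype(attributes):
    """Map semantic attributes to the prototype (semantic) part of a feature."""
    return attributes @ W_proto

# Hypothetical linear stand-in for the encoder-decoder's decoder:
# latent sample -> residual visual features not captured by semantics
W_res = rng.normal(scale=0.01, size=(latent_dim, feat_dim))

def residual(z):
    """Decode a latent sample into residual (non-semantic) visual features."""
    return z @ W_res

# Synthesize a visual feature for an unseen class:
# feature = prototype(attributes) + residual(latent noise)
attrs = rng.random((1, attr_dim))       # semantic attributes of an unseen class
z = rng.normal(size=(1, latent_dim))    # random latent gives residual diversity
synthetic_feature = prototype(attrs) + residual(z)
print(synthetic_feature.shape)  # (1, 2048)
```

Sampling different latents `z` for the same attributes yields diverse synthetic features for one class, which is what lets a downstream classifier be trained on unseen classes despite having no real unseen-class images.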

Keywords