Applied Sciences (Aug 2020)

A2C: Attention-Augmented Contrastive Learning for State Representation Extraction

  • Haoqiang Chen,
  • Yadong Liu,
  • Zongtan Zhou,
  • Ming Zhang

DOI
https://doi.org/10.3390/app10175902
Journal volume & issue
Vol. 10, no. 17
p. 5902

Abstract


Reinforcement learning (RL) faces a series of challenges, including learning efficiency and generalization. The state representation used to train RL agents is one of the important factors behind these challenges. In this paper, we explore how to provide a more efficient state representation for RL, using contrastive learning as the representation extraction method. We propose an implementation of an attention mechanism and extend an existing contrastive learning method by embedding it, obtaining an attention-augmented contrastive learning method called A2C. Using the state representation from A2C, the robot achieves better learning efficiency and generalization than with state-of-the-art representations. Moreover, our attention mechanism is shown to compute correlations between pixels at arbitrary distances, which helps capture more accurate obstacle information. Finally, when the attention mechanism is removed from A2C, the rewards obtained drop by more than 70%, which indicates the important role of the attention mechanism.
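To illustrate the property highlighted in the abstract (an attention mechanism that relates pixels at arbitrary spatial distances), below is a minimal PyTorch sketch of a generic self-attention block over a convolutional feature map. This is an illustrative assumption, not the authors' exact A2C module: the class name PixelSelfAttention, the 1x1-convolution projections, and the learned residual weight gamma are all hypothetical choices made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelSelfAttention(nn.Module):
    """Generic self-attention over a conv feature map (illustrative sketch).

    Every spatial position attends to every other position, so pairwise
    correlations are computed regardless of pixel distance. This is NOT the
    paper's exact module, only a sketch of the general technique.
    """

    def __init__(self, channels: int, reduced: int | None = None):
        super().__init__()
        reduced = reduced or max(channels // 8, 1)
        self.query = nn.Conv2d(channels, reduced, kernel_size=1)
        self.key = nn.Conv2d(channels, reduced, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)    # (B, HW, C')
        k = self.key(x).flatten(2)                      # (B, C', HW)
        v = self.value(x).flatten(2)                    # (B, C, HW)
        attn = F.softmax(q @ k, dim=-1)                 # (B, HW, HW): all pixel pairs
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # attention-augmented features


# Usage example on a dummy feature map
features = torch.randn(2, 64, 21, 21)
augmented = PixelSelfAttention(64)(features)  # same shape, (2, 64, 21, 21)
```

Because the attention matrix covers all position pairs, the output at each pixel can depend on any other pixel in the feature map, which is what allows correlations over arbitrary distances to be captured.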

Keywords