International Journal of Computational Intelligence Systems (Apr 2023)

Feature Equilibrium: An Adversarial Training Method to Improve Representation Learning

  • Minghui Liu,
  • Meiyi Yang,
  • Jiali Deng,
  • Xuan Cheng,
  • Tianshu Xie,
  • Pan Deng,
  • Haigang Gong,
  • Ming Liu,
  • Xiaomin Wang

DOI
https://doi.org/10.1007/s44196-023-00229-2
Journal volume & issue
Vol. 16, no. 1
pp. 1 – 12

Abstract

Over-fitting is a significant threat to the integrity and reliability of deep neural networks with a large number of parameters. One cause is that the model learns more dataset-specific features than general features during training. To address this problem, we propose an adversarial training method that helps the model strengthen general representation learning. In this method, we treat a classification model as a generator G and introduce an unsupervised discriminator D that distinguishes the hidden features of the classification model from real images, limiting their spatial distance. Notably, after overtraining, D can become a perfect discriminator, causing the gradient of the adversarial loss to fall to 0. To avoid this situation, we train D only with a probability $P_{c}$. Our proposed method is easy to incorporate into existing frameworks. It has been evaluated under various network architectures on datasets from different fields. Experiments show that this method, at low computational cost, outperforms the benchmark by 1.5–2 points on different datasets. For semantic segmentation on VOC, our proposed method achieves 2.2 points higher mAP.
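The stochastic discriminator update described in the abstract can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the function names, the `batch` argument, and the update callbacks are hypothetical placeholders. The key idea is that the generator (classifier) is updated every step, while D is updated only with probability $P_{c}$ so it never overtrains into a perfect discriminator whose adversarial-loss gradient vanishes.

```python
import random

def adversarial_step(batch, p_c, update_generator, update_discriminator,
                     rng=random):
    """One training step of the scheme (sketch, hypothetical names).

    The classifier (generator G) is always updated; the discriminator D
    is updated only with probability p_c, which prevents D from becoming
    a perfect discriminator with a zero adversarial-loss gradient.
    """
    update_generator(batch)          # supervised + adversarial loss for G
    if rng.random() < p_c:           # stochastic gate on D's update
        update_discriminator(batch)  # D: hidden features vs. real images
        return True                  # D was updated this step
    return False                     # D was skipped this step
```

Over many steps, D receives updates in roughly a `p_c` fraction of iterations; setting `p_c = 1` recovers standard alternating adversarial training.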

Keywords