IEEE Access (Jan 2021)

Adversarial Attacks Against Reinforcement Learning-Based Portfolio Management Strategy

  • Yu-Ying Chen,
  • Chiao-Ting Chen,
  • Chuan-Yun Sang,
  • Yao-Chun Yang,
  • Szu-Hao Huang

DOI
https://doi.org/10.1109/ACCESS.2021.3068768
Journal volume & issue
Vol. 9
pp. 50667–50685

Abstract

Many researchers have incorporated deep neural networks (DNNs) with reinforcement learning (RL) in automatic trading systems. However, such methods result in complicated algorithmic trading models with several defects; in particular, a DNN model is vulnerable to malicious adversarial samples. Research has rarely focused on planning long-term attacks against RL-based trading systems. To mount such attacks, an adversary must generate imperceptible perturbations while simultaneously reducing the number of modified steps. In this research, an adversary is used to attack an RL-based trading agent. First, we propose an extension of the ensemble of identical independent evaluators (EIIE) method, called enhanced EIIE, which incorporates information on the best bids and asks. Enhanced EIIE was demonstrated to produce an authoritative trading agent that yields better portfolio performance than an EIIE agent. Enhanced EIIE was then applied to the adversarial agent so that the agent learns when and how much to attack (in the form of introduced perturbations). In our experiments, our proposed adversarial attack mechanisms were more than 30% more effective at reducing accumulated portfolio value than the conventional attack mechanisms of the fast gradient sign method (FGSM) and iterative FGSM, which are the baselines most commonly researched and used for comparison.
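The FGSM and iterative FGSM baselines mentioned above can be sketched in a few lines. The following is a minimal NumPy illustration (not the authors' implementation): FGSM perturbs an input once in the sign direction of the loss gradient, while iterative FGSM takes several smaller steps and clips the result to an epsilon-ball around the original input. The toy linear loss and all variable names here are illustrative assumptions.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    # One-shot FGSM: move each input feature by epsilon
    # in the sign direction of the loss gradient.
    return x + epsilon * np.sign(grad)

def iterative_fgsm(x, grad_fn, epsilon, alpha, steps):
    # Iterative FGSM: repeat small FGSM steps of size alpha,
    # clipping back into the epsilon-ball around the clean input.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
    return x_adv

# Toy example: for the linear loss L(x) = w . x, grad_x L = w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
x_adv = fgsm_perturb(x, w, epsilon=0.01)
# x_adv = [1.01, 0.99, 1.01]
x_it = iterative_fgsm(x, lambda z: w, epsilon=0.01, alpha=0.005, steps=4)
```

The contribution described in the abstract goes beyond these per-step perturbations: the adversarial agent additionally learns *when* to inject perturbations over a long horizon, rather than attacking every time step.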

Keywords