IEEE Access (Jan 2023)

Remaining Useful Life Estimation in Prognostics Using Deep Reinforcement Learning

  • Qiankun Hu,
  • Yongping Zhao,
  • Yuqiang Wang,
  • Pei Peng,
  • Lihua Ren

DOI
https://doi.org/10.1109/ACCESS.2023.3263196
Journal volume & issue
Vol. 11
pp. 32919 – 32934

Abstract

In modern industrial systems, condition-based maintenance (CBM) has been widely adopted as an efficient maintenance strategy. Prognostics, a key enabler of CBM, involves the core task of estimating the remaining useful life (RUL) of engineered systems. Much recent research has focused on developing new machine learning (ML) based approaches for RUL estimation, employing a variety of ML algorithms. However, no prior research has applied deep reinforcement learning (DRL) to RUL estimation. To fill this gap, this paper proposes a novel DRL-based prognostic approach for RUL estimation. In the proposed approach, the conventional RUL estimation task is first formulated as a Markov decision process (MDP). An advanced DRL algorithm is then employed to learn the optimal RUL estimation policy in this MDP environment. The effectiveness and superiority of the proposed approach are demonstrated through a case study on turbofan engines in the C-MAPSS dataset. Compared to other approaches, the proposed approach achieves superior performance on all four sub-datasets of C-MAPSS. Moreover, on the most challenging sub-datasets, FD002 and FD004, the RMSE metric is improved by 14.4% and 7.81%, and the score metric by 3.7% and 48.79%, respectively.
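To make the abstract's MDP formulation concrete, the sketch below shows one plausible way to cast RUL estimation as an MDP: states are per-cycle sensor readings, the agent's action is its RUL estimate, and the reward penalizes estimation error. This is an illustrative assumption, not the paper's exact design; the class name `RULEnv`, the reward function, and the state definition are all hypothetical.

```python
import numpy as np

class RULEnv:
    """Toy MDP for RUL estimation (illustrative sketch, not the paper's
    exact formulation). Each episode walks through one engine's
    run-to-failure sensor trajectory; the agent's action at each cycle
    is its RUL estimate for that cycle."""

    def __init__(self, sensor_traj):
        # sensor_traj: array of shape (T, n_sensors), one row per cycle
        self.traj = np.asarray(sensor_traj, dtype=float)
        self.T = len(self.traj)
        self.t = 0

    def reset(self):
        self.t = 0
        return self.traj[self.t]  # state: current sensor reading

    def step(self, action):
        """action: scalar RUL estimate (in cycles).
        Reward is the negative absolute estimation error, so a perfect
        estimator accumulates zero total reward over an episode."""
        true_rul = self.T - 1 - self.t           # cycles until failure
        reward = -abs(float(action) - true_rul)  # hypothetical reward
        self.t += 1
        done = self.t >= self.T
        next_state = self.traj[min(self.t, self.T - 1)]
        return next_state, reward, done

# Usage: an oracle policy that always outputs the true RUL earns 0 reward.
env = RULEnv(np.random.default_rng(0).normal(size=(5, 3)))
state, total = env.reset(), 0.0
for t in range(env.T):
    state, r, done = env.step(env.T - 1 - t)  # oracle estimate
    total += r
print(total)  # 0.0 for the oracle policy
```

A DRL algorithm would then train a policy network mapping sensor states to RUL estimates by maximizing this cumulative reward, in place of the oracle above.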

Keywords