IEEE Access (Jan 2024)

Electric Vehicle Battery State of Charge Estimation With an Ensemble Algorithm Using Central Difference Kalman Filter (CDKF) and Non-Linear Autoregressive With Exogenous Input (NARX)

  • Ayodeji S. Ogundana,
  • Pranaya K. Terala,
  • Migara Y. Amarasinghe,
  • Xuanchen Xiang,
  • Simon Y. Foo

DOI
https://doi.org/10.1109/ACCESS.2024.3371883
Journal volume & issue
Vol. 12
pp. 33705 – 33719

Abstract

Conventional model-based probabilistic inference methods require increasingly complex models to improve Electric Vehicle (EV) battery State of Charge (SOC) estimation. Deep learning methods have gained popularity in recent years for their model-free estimation. However, practical constraints such as insufficient training data, model complexity for real-time implementation, and poor generalization to new datasets hinder performance reliability. Another major practical drawback of the data-driven deep learning approach is its poor convergence from an unfamiliar initial error state, as the training dataset does not adequately cover these practical error scenarios. This paper proposes an ensemble method that combines weighted estimates from the Central Difference Kalman Filter (CDKF) and the Nonlinear Autoregressive with Exogenous Input (NARX) network to accurately estimate SOC in the early stages of degradation. We employ a parallel ensemble estimation method that reduces estimation bias and improves the generalization, accuracy, robustness, and reliability of the estimator. We propose a pre-estimated voting weight to combine the ensemble members and employ a CDKF covariance-dependent method as the optimal approach for initializing the ensemble system to achieve robust convergence. The state converges in an average of 136 time-steps when initialized halfway from the true state. The average Mean Absolute Error (MAE) of the ensemble method is about 0.5% with an average training set of about 31,041 time-steps. The model was validated on conventional drive-cycle data and shown to outperform its individual ensemble members and gated Recurrent Neural Networks (RNNs) such as the Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and Bidirectional LSTM (BiLSTM).
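As a minimal sketch of the weighted voting described in the abstract: the fused SOC is a convex combination of the two member estimates. The weight value, function name, and member outputs below are hypothetical illustrations, not the paper's pre-estimated weight or data.

```python
import numpy as np

def ensemble_soc(soc_cdkf, soc_narx, w):
    """Fuse two member SOC estimates with a fixed voting weight.

    soc_cdkf, soc_narx: per-time-step SOC estimates from the CDKF and
        NARX members (array-like, values in [0, 1]).
    w: weight on the CDKF estimate, in [0, 1]. In the paper this weight
        is pre-estimated; here it is simply a placeholder parameter.
    """
    soc_cdkf = np.asarray(soc_cdkf, dtype=float)
    soc_narx = np.asarray(soc_narx, dtype=float)
    # Convex combination: w * CDKF + (1 - w) * NARX
    return w * soc_cdkf + (1.0 - w) * soc_narx

# Toy usage with made-up member estimates (not data from the paper):
fused = ensemble_soc([0.80, 0.79, 0.78], [0.82, 0.80, 0.79], w=0.6)
```

Running both members in parallel and blending their outputs is what lets the ensemble inherit the CDKF's fast recovery from a bad initial state while retaining the NARX network's data-driven accuracy.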

Keywords