Unsupervised learning of charge-discharge cycles from various lithium-ion battery cells to visualize dataset characteristics and to interpret model performance
Akihiro Yamashita,
Sascha Berg,
Egbert Figgemeier
Affiliations
Akihiro Yamashita
Helmholtz Institute Münster: Ionics in Energy Storage (IMD-4 / HI MS), Forschungszentrum Jülich, Jülich, Germany; Corresponding authors at: Forschungszentrum Jülich c/o RWTH Aachen University, Campus-Boulevard 89, 52074 Aachen, Germany.
Sascha Berg
Helmholtz Institute Münster: Ionics in Energy Storage (IMD-4 / HI MS), Forschungszentrum Jülich, Jülich, Germany; Institute for Power Electronics and Electrical Drives (ISEA), RWTH Aachen University, Aachen, Germany
Egbert Figgemeier
Helmholtz Institute Münster: Ionics in Energy Storage (IMD-4 / HI MS), Forschungszentrum Jülich, Jülich, Germany; Institute for Power Electronics and Electrical Drives (ISEA), RWTH Aachen University, Aachen, Germany; Jülich Aachen Research Alliance, JARA-Energy, Germany; Corresponding authors at: Forschungszentrum Jülich c/o RWTH Aachen University, Campus-Boulevard 89, 52074 Aachen, Germany.
Machine learning (ML) is an increasingly popular tool in lithium-ion battery (LIB) research, and a growing number of datasets have been published to support it. However, the applicability of an ML model across different data sources and LIB cell types has not been well studied. In this paper, an unsupervised learning model, the variational autoencoder (VAE), is evaluated on three datasets of charge-discharge cycles recorded under different conditions. The model was first trained with a publicly available dataset of commercial cylindrical cells, and then evaluated with our private datasets of commercial pouch and hand-made coin cells. These cells used different chemistries and were tested with different cycle testers for different purposes, which gives each dataset distinct characteristics. We report that researchers can recognise these characteristics with a VAE and use them to plan proper data preprocessing. We also discuss the interpretability of ML models.
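The VAE workflow summarised above can be illustrated with a minimal sketch: cycles are encoded to Gaussian latent parameters, a latent code is sampled via the reparameterization trick, and the cycle is reconstructed, with the loss combining reconstruction error and a KL term. This is a generic NumPy illustration, not the authors' implementation; the cycle length, latent dimensionality, and single-layer linear networks are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for preprocessed charge-discharge cycles: each row is one
# cycle resampled to a fixed number of points (hypothetical shape).
n_cycles, cycle_len, latent_dim = 8, 32, 2
X = rng.random((n_cycles, cycle_len))

# Randomly initialised single-layer encoder/decoder weights (illustrative only;
# a real VAE would use deeper networks trained by gradient descent).
W_mu = rng.normal(scale=0.1, size=(cycle_len, latent_dim))
W_logvar = rng.normal(scale=0.1, size=(cycle_len, latent_dim))
W_dec = rng.normal(scale=0.1, size=(latent_dim, cycle_len))

def encode(x):
    """Map a batch of cycles to the mean and log-variance of q(z|x)."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, keeping the sampling differentiable."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Reconstruct cycles from latent codes; sigmoid keeps outputs in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(z @ W_dec)))

mu, logvar = encode(X)
z = reparameterize(mu, logvar)
X_hat = decode(z)

# VAE objective: reconstruction error plus KL divergence to the unit Gaussian.
recon = np.mean(np.sum((X - X_hat) ** 2, axis=1))
kl = np.mean(-0.5 * np.sum(1 + logvar - mu ** 2 - np.exp(logvar), axis=1))
loss = recon + kl
```

Visualising dataset characteristics then amounts to plotting the two-dimensional latent codes `mu` for each cycle, so that cycles from different cells or testers can be compared in one low-dimensional map.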