Carbon Trends (Apr 2021)

Impact of training and validation data on the performance of neural network potentials: A case study on carbon using the CA-9 dataset

  • Daniel Hedman,
  • Tom Rothe,
  • Gustav Johansson,
  • Fredrik Sandin,
  • J. Andreas Larsson,
  • Yoshiyuki Miyamoto

Journal volume & issue
Vol. 3
p. 100027

Abstract


The use of machine learning to accelerate computer simulations is on the rise. In atomistic simulations, the use of machine learning interatomic potentials (ML-IAPs) can significantly reduce computational costs while maintaining accuracy close to that of ab initio methods. To achieve this, ML-IAPs are trained on large datasets of images, which are atomistic configurations labeled with data from ab initio calculations. Focusing on carbon, we use deep learning to train neural network potentials (NNPs), a form of ML-IAP, based on the state-of-the-art end-to-end NNP architecture SchNet, and investigate how the choice of training and validation data affects the performance of the NNPs. Training is performed on the CA-9 dataset, a 9-carbon-allotrope dataset constructed from data obtained via ab initio molecular dynamics (AIMD). Our results show that image generation with AIMD causes a high degree of similarity between the generated images, which has a detrimental effect on the performance of the NNPs. However, this effect can be mitigated by carefully choosing which images from the dataset are included in the training and validation data. We conclude by benchmarking our trained NNPs in applications such as relaxation and phonon calculations, where we reproduce ab initio results with high accuracy.
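The mitigation described in the abstract — selecting mutually dissimilar images for training and validation rather than taking consecutive AIMD frames — can be illustrated with a simple greedy farthest-point sampling sketch. This is a hypothetical helper, not the authors' actual selection procedure, and it assumes each image has already been reduced to a fixed-length feature vector (e.g. a structural descriptor):

```python
import numpy as np

def farthest_point_sampling(features, n_select, seed=0):
    """Greedily pick n_select rows of `features` that are mutually far
    apart (Euclidean distance), reducing similarity between the images
    kept for training/validation. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    first = int(rng.integers(len(features)))
    selected = [first]
    # distance of every image to its nearest already-selected image
    d = np.linalg.norm(features - features[first], axis=1)
    d[first] = -np.inf  # never re-pick a selected image
    while len(selected) < n_select:
        idx = int(np.argmax(d))  # most dissimilar remaining image
        selected.append(idx)
        d = np.minimum(d, np.linalg.norm(features - features[idx], axis=1))
        d[idx] = -np.inf
    return selected

# toy example: two tight clusters of near-identical "images";
# the selection draws from both instead of one
pts = np.vstack([np.zeros((50, 3)), np.ones((50, 3))])
picked = farthest_point_sampling(pts, 4)
```

Because consecutive AIMD frames are highly correlated, a diversity-driven split like this keeps the validation set from being a near-copy of the training set, which is the failure mode the abstract identifies.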

Keywords