ITM Web of Conferences (Jan 2024)

An Empirical Study on the Correlation between Early Stopping Patience and Epochs in Deep Learning

  • Hussein Bootan M.,
  • Shareef Shareef M.

DOI: https://doi.org/10.1051/itmconf/20246401003
Journal volume & issue: Vol. 64, p. 01003

Abstract


Early stopping is a technique for preventing overfitting in deep learning models: training is halted when the validation loss stops improving. The optimal number of epochs to train a model depends on several factors, including the patience value used in early stopping. In this study, we investigated the correlation between early stopping patience and the number of epochs in deep learning models. We conducted experiments using a convolutional neural network on the CIFAR-10 dataset with varying patience values and a fixed epoch budget. Our results show that the optimal number of training epochs depends on the patience value: higher patience values generally require more epochs to reach the best validation accuracy, while lower patience values may cause premature stopping and suboptimal performance. However, longer training does not necessarily improve validation accuracy, and early stopping effectively prevents overfitting. Our findings suggest that the patience value and the number of epochs should be chosen jointly and deliberately when training deep learning models.
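The patience mechanism the abstract describes can be made concrete with a short sketch. The function below is not from the paper; it is a minimal, illustrative implementation of the standard rule: track the best validation loss seen so far, and stop once `patience` consecutive epochs pass without improvement.

```python
def early_stopping_epoch(val_losses, patience):
    """Return the 1-based epoch at which training stops.

    Stops after `patience` consecutive epochs with no improvement
    in validation loss; if that never happens, runs to the end.
    """
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best_loss:
            best_loss = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch  # triggered early stop
    return len(val_losses)  # patience never exhausted

# Validation loss improves for 3 epochs, then degrades.
losses = [0.9, 0.8, 0.7, 0.71, 0.72, 0.73]
print(early_stopping_epoch(losses, patience=2))  # → 5
print(early_stopping_epoch(losses, patience=3))  # → 6
```

With the same loss curve, a larger patience value lets training run longer before stopping, which is the trade-off the study measures: small patience risks stopping before the model has converged, while large patience spends extra epochs without necessarily improving validation accuracy.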