A Gentle Introduction to Early Stopping to Avoid Overtraining Neural Networks

Last Updated on August 6, 2019

A major challenge in training neural networks is deciding how long to train them.

Too little training means the model will underfit both the training and test datasets. Too much training means the model will overfit the training dataset and perform poorly on the test set.

A compromise is to train on the training dataset but to stop training at the point when performance on a validation dataset starts to degrade. This simple, effective, and widely used approach to training neural networks is called early stopping.
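As a concrete illustration of that loop, here is a minimal, framework-agnostic sketch in plain NumPy: train for one epoch, check the loss on a held-out validation set, remember the best weights seen so far, and stop once the validation loss has not improved for a fixed number of epochs. The toy regression dataset, linear model, and hyperparameters (learning rate, patience) are illustrative assumptions, not part of the original post.

```python
# Minimal sketch of the early stopping loop using NumPy gradient descent
# on a toy regression problem (all numbers here are illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 examples, 20 features, only 3 of which are informative.
X = rng.normal(size=(200, 20))
true_w = np.zeros(20)
true_w[:3] = [1.5, -2.0, 0.5]
y = X @ true_w + rng.normal(scale=0.5, size=200)

# Hold out part of the data as a validation set.
X_train, X_val = X[:150], X[150:]
y_train, y_val = y[:150], y[150:]

w = np.zeros(20)
lr = 0.01
patience = 10           # epochs to wait for an improvement before stopping
best_val = np.inf
best_w = w.copy()
wait = 0

for epoch in range(1000):
    # One epoch of full-batch gradient descent on the training set.
    grad = 2.0 / len(X_train) * X_train.T @ (X_train @ w - y_train)
    w -= lr * grad

    # Monitor performance on the held-out validation set.
    val_loss = np.mean((X_val @ w - y_val) ** 2)
    if val_loss < best_val:
        best_val = val_loss
        best_w = w.copy()   # remember the weights at the best epoch so far
        wait = 0
    else:
        wait += 1
        if wait >= patience:
            print(f"Stopping early at epoch {epoch}, best val loss {best_val:.4f}")
            break

w = best_w  # restore the best weights seen during training
```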

In this post, you will discover that stopping the training of a neural network early, before it has overfit the training dataset, can reduce overfitting and improve the generalization of deep neural networks.

After reading this post, you will know:

  • The challenge of training a neural network long enough to learn the mapping, but not so long that it overfits the training data.
  • Model performance on a holdout validation dataset can be monitored during training and training stopped when generalization error starts to increase.
  • The use of early stopping requires the selection of a performance measure to monitor, a trigger to stop training, and a choice of which model weights to keep, as shown in the sketch after this list.
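For readers working in Keras, these three choices map directly onto arguments of the EarlyStopping callback. The sketch below assumes TensorFlow 2.x; the tiny model and synthetic data are placeholders for illustration only.

```python
# Minimal sketch of early stopping with the Keras EarlyStopping callback
# (assumes TensorFlow 2.x; model and data are illustrative placeholders).
import numpy as np
from tensorflow import keras

# Synthetic binary classification data.
X = np.random.rand(500, 10)
y = (X.sum(axis=1) > 5).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss",          # the performance measure to monitor
    patience=5,                  # the trigger: stop after 5 epochs with no improvement
    restore_best_weights=True,   # which weights to keep: those from the best epoch
)

model.fit(X, y, validation_split=0.3, epochs=200,
          callbacks=[early_stop], verbose=0)
```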