Train Neural Networks With Noise to Reduce Overfitting

Last Updated on August 6, 2019

Training a neural network with a small dataset can cause the network to memorize all training examples, in turn leading to overfitting and poor performance on a holdout dataset.

Small datasets may also represent a harder mapping problem for neural networks to learn, given the patchy or sparse sampling of points in the high-dimensional input space.

One approach to making the input space smoother and easier to learn is to add noise to inputs during training.
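As a minimal sketch of this idea, the snippet below adds zero-mean Gaussian noise to a batch of inputs with NumPy. The function name `noisy_batch` and the `stddev` value are illustrative choices, not part of any particular library; in practice the noise scale is a hyperparameter to tune, and noise is applied only during training, never at evaluation time.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def noisy_batch(x, stddev=0.1):
    """Return a copy of the input batch with zero-mean Gaussian noise added.

    Applying this to each training batch presents the network with a
    slightly different version of every example on every pass, which
    smooths the input space the model must learn.
    """
    return x + rng.normal(loc=0.0, scale=stddev, size=x.shape)

# Example: a batch of 4 samples with 3 features each.
x = np.ones((4, 3))
x_noisy = noisy_batch(x, stddev=0.1)
print(x_noisy.shape)  # (4, 3)
```

Frameworks also provide this as a layer (for example, Keras has a `GaussianNoise` layer that is only active during training), so you rarely need to hand-roll it in a real project.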

In this post, you will discover that adding noise to a neural network during training can improve the robustness of the network, resulting in better generalization and faster learning.

After reading this post, you will know:

  • Small datasets can make learning challenging for neural networks, and the training examples can simply be memorized.
  • Adding noise during training can make the training process more robust and reduce generalization error.
  • Noise is traditionally added to the inputs, but it can also be added to the weights, the gradients, and even the activations (the outputs of hidden layers).

Kick-start your project with my new book Better Deep Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

To finish reading, please visit source site