8 Tricks for Configuring Backpropagation to Train Better Neural Networks

Last Updated on August 6, 2019

Neural network models are trained using stochastic gradient descent, with the backpropagation algorithm computing the error gradients used to update the model weights.
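
To make that division of labor concrete: backpropagation computes the gradient of the error with respect to each weight, and stochastic gradient descent uses those gradients to update the weights. The sketch below shows this loop in plain NumPy on a made-up problem; the toy data, network size, and learning rate are illustrative assumptions, not code from this post.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))                  # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy binary targets

# small 2-4-1 network with sigmoid activations (sizes are assumptions)
W1, b1 = rng.normal(scale=0.1, size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.1, size=(4, 1)), np.zeros(1)
lr = 0.5                                       # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    for i in rng.permutation(len(X)):          # "stochastic": one example at a time
        x, t = X[i:i + 1], y[i:i + 1].reshape(1, 1)
        # forward pass
        h = sigmoid(x @ W1 + b1)               # hidden activations, shape (1, 4)
        p = sigmoid(h @ W2 + b2)               # predicted probability, shape (1, 1)
        # backward pass: backpropagation computes the gradients
        dz2 = p - t                            # cross-entropy gradient at the output pre-activation
        dW2, db2 = h.T @ dz2, dz2.ravel()
        dz1 = (dz2 @ W2.T) * h * (1.0 - h)     # gradient at the hidden pre-activation
        dW1, db1 = x.T @ dz1, dz1.ravel()
        # gradient descent uses the gradients to update the weights
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
print("training accuracy:", ((p > 0.5) == y.astype(bool)).mean())
```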

The optimization problem solved by training a neural network model is very challenging. Although these algorithms are widely used because they perform so well in practice, there is no guarantee that they will converge to a good model in a timely manner.

The challenge of training neural networks really comes down to configuring the training algorithm well.

In this post, you will discover tips and tricks for getting the most out of the backpropagation algorithm when training neural network models.

After reading this post, you will know:

  • The challenge of training a neural network is really the balance between learning the training dataset and generalizing to new examples beyond the training dataset.
  • Eight specific tricks that you can use to train better neural network models, faster.
  • Second-order optimization algorithms that can also be used to train neural networks under certain circumstances (a brief sketch follows this list).
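
As a hedged illustration of that last point, the sketch below fits a single-layer network with L-BFGS, a quasi-Newton method that builds an approximation of second-order (curvature) information, via scipy.optimize.minimize. The model, data, and names are my own assumptions, not code from this post.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))                  # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy binary targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w):
    # binary cross-entropy and its gradient for a single sigmoid unit,
    # with weights w[:2] and bias w[2]
    p = sigmoid(X @ w[:2] + w[2])
    eps = 1e-12                                # guard against log(0)
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    err = (p - y) / len(X)                     # per-example gradient at the pre-activation
    grad = np.concatenate([X.T @ err, [err.sum()]])
    return loss, grad

# L-BFGS uses repeated full-batch gradient evaluations to build an
# approximate inverse Hessian, i.e. second-order curvature information
result = minimize(loss_and_grad, x0=np.zeros(3), jac=True, method="L-BFGS-B")
print("converged:", result.success, "final loss:", result.fun)
```

One reason such methods apply only under certain circumstances is that they are typically run full-batch, which is practical for small models and datasets but expensive for large ones.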

Kick-start your project with my new book Better Deep Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.
