Difference Between Backpropagation and Stochastic Gradient Descent

Last Updated on February 1, 2021

There is a lot of confusion among beginners about which algorithm is used to train deep learning neural network models.

It is common to hear that neural networks learn using the “back-propagation of error” algorithm or “stochastic gradient descent.” Sometimes, either of these algorithms is used as shorthand for how a neural network is fit on a training dataset, although in many cases there is deep confusion about what these algorithms are, how they are related, and how they might work together.
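As a rough sketch of how the two algorithms fit together, consider a hypothetical one-neuron model and a single made-up training example: back-propagation applies the chain rule to compute the gradient of the loss, and stochastic gradient descent then uses that gradient to update the weights. The model, data, and learning rate below are illustrative, not from any particular library.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=3)  # weights of a single linear neuron
b = 0.0                 # bias
learning_rate = 0.1     # step size for gradient descent

x = np.array([0.5, -1.2, 0.3])  # one training example (input)
y = 1.0                          # its target value

# forward pass: prediction and squared-error loss
yhat = w @ x + b
loss = 0.5 * (yhat - y) ** 2
print(f"loss before update: {loss:.4f}")

# back-propagation: the chain rule gives the gradient of the loss
grad_yhat = yhat - y    # dL/dyhat
grad_w = grad_yhat * x  # dL/dw
grad_b = grad_yhat      # dL/db

# stochastic gradient descent: one weight update from this single example
w -= learning_rate * grad_w
b -= learning_rate * grad_b
```

In a real network, the same pattern repeats: back-propagation propagates gradients backward through each layer, and stochastic gradient descent applies an update for each example (or mini-batch) drawn from the training dataset.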

This tutorial is designed to make the roles of the stochastic gradient descent and back-propagation algorithms clear in training deep learning neural networks.

In this tutorial, you will discover the difference between stochastic gradient descent and the back-propagation algorithm.
