Gradient Descent in Python: Implementation and Theory


Introduction

This tutorial is an introduction to a simple optimization technique called gradient descent, which is widely used to train state-of-the-art machine learning models.

We’ll develop a general-purpose routine that implements gradient descent and apply it to several problems, including classification via supervised learning.

Along the way, we’ll gain insight into how the algorithm works and study the effect of various hyperparameters on its performance. We’ll also cover the batch and stochastic gradient descent variants.

What is Gradient Descent?

Gradient descent is an optimization technique that can find the minimum of an objective function. It is a greedy technique: it approaches the optimal solution by repeatedly taking a step in the direction of the maximum rate of decrease of the function, i.e., against the gradient.
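The update rule described above can be sketched in a few lines of Python. This is a minimal, illustrative implementation (the function and parameter names here are our own, not fixed by the tutorial), assuming we are given a callable that evaluates the gradient at a point:

```python
import numpy as np

def gradient_descent(grad, start, learn_rate=0.1, n_iter=100, tol=1e-6):
    """Minimize a function by repeatedly stepping against its gradient.

    grad:  callable returning the gradient at a point
    start: initial guess (scalar or NumPy array)
    """
    x = np.asarray(start, dtype=float)
    for _ in range(n_iter):
        step = learn_rate * np.asarray(grad(x))
        if np.all(np.abs(step) < tol):  # stop once the steps become negligible
            break
        x = x - step  # move in the direction of steepest decrease
    return x

# Example: minimize f(x) = x**2 + 5, whose gradient is 2*x; the minimum is at x = 0.
x_min = gradient_descent(grad=lambda x: 2 * x, start=10.0)
```

The learning rate controls the step size: too small and convergence is slow, too large and the iterates can overshoot the minimum and diverge, which is why it is treated as a hyperparameter.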

By contrast, gradient ascent is its close counterpart: it finds the maximum of a function by following the direction of the maximum rate of increase of the function, i.e., the gradient itself.
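Gradient ascent differs from gradient descent only in the sign of the update: the step is taken with the gradient rather than against it. A minimal sketch (names illustrative, not from the original text):

```python
def gradient_ascent(grad, start, learn_rate=0.1, n_iter=100):
    """Maximize a function by repeatedly stepping along its gradient."""
    x = float(start)
    for _ in range(n_iter):
        x = x + learn_rate * grad(x)  # step *with* the gradient to climb uphill
    return x

# Example: maximize f(x) = -(x - 3)**2, whose gradient is -2*(x - 3); the maximum is at x = 3.
x_max = gradient_ascent(grad=lambda x: -2 * (x - 3), start=0.0)
```

In practice the same code handles both cases: maximizing f is equivalent to minimizing -f, so many libraries implement only descent.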

