A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning

Last Updated on August 15, 2020

Gradient boosting is one of the most powerful techniques for building predictive models.

In this post you will discover the gradient boosting machine learning algorithm and get a gentle introduction to where it came from and how it works.

After reading this post, you will know:

  • The origin of boosting from learning theory and AdaBoost.
  • How gradient boosting works, including the loss function, weak learners, and the additive model.
  • How to improve performance over the base algorithm with various regularization schemes.
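To make the second point concrete, here is a minimal sketch of the additive, stagewise idea behind gradient boosting. It assumes squared loss (so each stage fits the residuals, which equal the negative gradient) and uses shallow regression trees from scikit-learn as the weak learners; the data, depth, and learning rate are illustrative choices, not a definitive implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy 1-D regression problem (illustrative data, not from the post).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

# Stage 0: the constant prediction that minimizes squared loss is the mean.
pred = np.full_like(y, y.mean())

learning_rate = 0.1  # shrinkage: a simple regularization scheme
trees = []

for _ in range(100):
    residuals = y - pred                 # negative gradient of squared loss
    tree = DecisionTreeRegressor(max_depth=2, random_state=0)  # weak learner
    tree.fit(X, residuals)               # fit the weak learner to the gradient
    pred += learning_rate * tree.predict(X)  # additive model update
    trees.append(tree)

mse = np.mean((y - pred) ** 2)
```

The shrinkage factor shows one of the regularization schemes mentioned above: scaling each tree's contribution down forces the model to take many small steps rather than a few large ones.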

Kick-start your project with my new book XGBoost With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Photo by brando.n, some rights reserved.

