How to Configure the Gradient Boosting Algorithm

Last Updated on August 15, 2020

Gradient boosting is one of the most powerful techniques for applied machine learning and as such is quickly becoming one of the most popular.

But how do you configure gradient boosting on your problem?

In this post, you will discover how you can configure gradient boosting for your machine learning problem by looking at configurations reported in books, papers, and competition results.

After reading this post, you will know:

  • How to configure gradient boosting according to the original sources.
  • Ideas for configuring the algorithm from the defaults and suggestions in standard implementations (see the sketch after this list).
  • Rules of thumb for configuring gradient boosting and XGBoost from top Kaggle competitors.
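
Before digging into the specific recommendations, here is a minimal sketch of the kinds of knobs this post is about, using scikit-learn's GradientBoostingClassifier. The parameter values shown are simply that library's defaults and are illustrative starting points, not the tuned settings discussed later; the synthetic dataset and cross-validation setup are my own additions for demonstration.

```python
# A minimal sketch of the common gradient boosting hyperparameters.
# Values below are scikit-learn's defaults, used here only as a starting point.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Small synthetic dataset purely for demonstration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=7)

# The key knobs shared by most implementations: number of trees,
# learning rate (shrinkage), tree depth, and row subsampling.
model = GradientBoostingClassifier(
    n_estimators=100,   # number of boosting stages (trees)
    learning_rate=0.1,  # shrinkage applied to each tree's contribution
    max_depth=3,        # depth of each individual tree
    subsample=1.0,      # fraction of rows per tree (< 1.0 gives stochastic gradient boosting)
    random_state=7,
)

scores = cross_val_score(model, X, y, cv=5)
print("Mean accuracy: %.3f" % scores.mean())
```

The same four parameters (number of trees, learning rate, tree depth, and subsampling rate) appear under slightly different names in XGBoost and other libraries, which is why the rest of this post frames its recommendations around them.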

Kick-start your project with my new book XGBoost With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Photo by Chris Sorge, some rights reserved.
