How to Evaluate Gradient Boosting Models with XGBoost in Python

Last Updated on August 27, 2020

The goal of predictive modeling is to develop a model that is accurate on unseen data.

This can be achieved using statistical techniques where the training dataset is carefully used to estimate the performance of the model on new and unseen data.

In this tutorial, you will discover how you can evaluate the performance of your gradient boosting models with XGBoost in Python.

After completing this tutorial, you will know:

  • How to evaluate the performance of your XGBoost models using train and test datasets.
  • How to evaluate the performance of your XGBoost models using k-fold cross validation.

Kick-start your project with my new book XGBoost With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Update Jan/2017: Updated to reflect changes in scikit-learn API version 0.18.1.
  • Update Mar/2018: Added alternate link to download the dataset as the original appears to have been taken down.