How to Best Tune Multithreading Support for XGBoost in Python

Last Updated on August 27, 2020

The XGBoost library for gradient boosting is designed for efficient multi-core parallel processing.

This allows it to efficiently use all of the CPU cores in your system when training.
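As a rough illustration (not taken from the original post), the number of threads is exposed as a parameter on the scikit-learn wrapper for XGBoost. In recent releases the parameter is named n_jobs; older releases used nthread.

```python
# Minimal sketch: controlling the number of CPU cores XGBoost uses for training.
from xgboost import XGBClassifier

# use 4 cores; set to -1 to use all available cores
# (older XGBoost releases named this parameter "nthread")
model = XGBClassifier(n_jobs=4)
```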

In this post, you will discover the parallel processing capabilities of the XGBoost library in Python.

After reading this post you will know:

  • How to confirm that XGBoost multi-threading support is working on your system.
  • How to evaluate the effect of increasing the number of threads on XGBoost (see the sketch after this list).
  • How to get the most out of multithreaded XGBoost when using cross validation and grid search.

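A minimal timing sketch, in the spirit of the experiments described in the post (the synthetic dataset and thread counts below are placeholders, not the post's actual data or results):

```python
# Time XGBoost training with different numbers of threads.
import time
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# synthetic data standing in for a real dataset
X, y = make_classification(n_samples=10000, n_features=50, random_state=7)

for n in [1, 2, 4, 8]:
    start = time.time()
    model = XGBClassifier(n_jobs=n)
    model.fit(X, y)
    print("%d threads: %.2f seconds" % (n, time.time() - start))
```

Note that when XGBoost is combined with scikit-learn's cross_val_score or GridSearchCV, both levels expose their own n_jobs parameter, so it is worth experimenting with which level is given the cores rather than simply setting both to -1.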
Kick-start your project with my new book XGBoost With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Update Jan/2017: Updated to reflect changes in scikit-learn API version 0.18.1.

How to Best Tune Multithreading Support for XGBoost in Python
Photo by Nicholas A. Tonelli, some rights reserved.
