Recommendations for Deep Learning Neural Network Practitioners

Deep learning neural networks are relatively straightforward to define and train given the wide adoption of open source libraries.

Nevertheless, neural networks remain challenging to configure and train.

In his 2012 paper “Practical Recommendations for Gradient-Based Training of Deep Architectures,” published both as a preprint and as a chapter of the popular book “Neural Networks: Tricks of the Trade,” Yoshua Bengio, one of the fathers of the field of deep learning, provides practical recommendations for configuring and tuning neural network models.

In this post, you will step through this long and interesting paper and pick out the most relevant tips and tricks for modern deep learning practitioners.

After reading this post, you will know:

  • The early foundations of the deep learning renaissance, including pretraining and autoencoders.
  • Recommendations for the initial configuration of the range of neural network hyperparameters (a brief code sketch follows this list).
  • How to tune neural network hyperparameters effectively, along with tactics for tuning models more efficiently.
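
As a concrete starting point for that second item, the short sketch below defines a small network in Keras with a plain set of initial hyperparameter values (hidden layer size, learning rate, and batch size). The specific values, the synthetic dataset, and the choice of Keras are illustrative assumptions for this post, not prescriptions taken verbatim from Bengio's paper.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.optimizers import SGD

# Synthetic data stands in for a real dataset (1,000 samples, 20 features).
rng = np.random.default_rng(1)
X = rng.random((1000, 20))
y = (X.sum(axis=1) > 10).astype("float32")

# A small multilayer perceptron: one ReLU hidden layer and a sigmoid output.
model = Sequential([
    Input(shape=(20,)),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),
])

# Plain SGD with a small learning rate and modest mini-batches is a common
# first configuration to tune from; the exact values here are assumptions.
model.compile(optimizer=SGD(learning_rate=0.01),
              loss="binary_crossentropy",
              metrics=["accuracy"])

model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2, verbose=0)
```

From a working baseline like this, each hyperparameter can then be adjusted one at a time or searched over more systematically, which is the focus of the tuning recommendations.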

Kick-start your project with my new book Better Deep Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.
