How to Reduce Generalization Error With Activity Regularization in Keras

Last Updated on August 25, 2020

Activity regularization provides an approach to encourage a neural network to learn sparse features or internal representations of raw observations.

It is common to seek sparse learned representations in autoencoders, called sparse autoencoders, and in encoder-decoder models, although the approach can also be used generally to reduce overfitting and improve a model’s ability to generalize to new observations.
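
To make the idea concrete, a common formulation (assumed here for illustration; the tutorial itself focuses on the Keras API options) adds an L1 penalty on a layer's output activations to the training loss:

$\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{data}} + \lambda \sum_i |h_i|$

where $h_i$ are the activations output by the regularized layer and $\lambda$ controls the strength of the penalty. Pushing many of the $h_i$ toward zero is what yields a sparse internal representation.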

In this tutorial, you will discover the Keras API for adding activity regularization to deep learning neural network models.

After completing this tutorial, you will know:

  • How to create vector norm regularizers using the Keras API.
  • How to add activity regularization to MLP, CNN, and RNN layers using the Keras API (a short sketch follows this list).
  • How to reduce overfitting by adding activity regularization to an existing model.
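
As a minimal sketch of the points above (assuming TensorFlow 2.x and the tf.keras API; the 0.001 coefficients are placeholders, not recommended values), the same activity_regularizer argument attaches a vector norm penalty to MLP, CNN, and RNN layers:

```python
# Minimal sketch: vector norm regularizers applied as activity regularization.
# Assumes TensorFlow 2.x; the 0.001 coefficients are placeholders only.
from tensorflow.keras import regularizers
from tensorflow.keras.layers import Dense, Conv2D, LSTM

# Vector norm regularizers available in the Keras API.
l1_reg = regularizers.l1(0.001)                      # L1 norm: encourages sparse activations
l2_reg = regularizers.l2(0.001)                      # L2 norm: encourages small activations
l1_l2_reg = regularizers.l1_l2(l1=0.001, l2=0.001)   # combined L1/L2 penalty

# MLP layer: penalize the outputs of a fully connected layer.
dense = Dense(32, activation='relu', activity_regularizer=l1_reg)

# CNN layer: the same argument works on convolutional layers.
conv = Conv2D(32, (3, 3), activation='relu', activity_regularizer=l1_reg)

# RNN layer: recurrent layers such as LSTM accept it as well.
lstm = LSTM(32, activity_regularizer=l1_reg)
```

In each case, the penalty is computed from the layer's output (its activations) and added to the loss that the model minimizes during training.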

Kick-start your project with my new book Better Deep Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Updated Oct/2019: Updated for Keras 2.3 and TensorFlow 2.0.