Using Normalization Layers to Improve Deep Learning Models

You’ve probably been told to standardize or normalize the inputs to your model to improve performance. But what is normalization, and how can we implement it easily in our deep learning models? Normalizing our inputs aims to put all of our features on the same scale, which we’ll explore more in this article. In neural networks, the output of each layer serves as the input to the next layer, […]
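As a minimal sketch of what this can look like in Keras, here is a model with a standard BatchNormalization layer; the input shape and layer sizes below are illustrative assumptions, not taken from the article:

```python
# Sketch: normalizing a layer's inputs with BatchNormalization in Keras.
# The 10-feature input and layer widths are illustrative assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, BatchNormalization

model = Sequential([
    Dense(32, activation='relu', input_shape=(10,)),
    BatchNormalization(),  # rescales the previous layer's outputs
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
```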

Read more

How To Improve Deep Learning Performance

Last Updated on August 6, 2019 20 Tips, Tricks and Techniques That You Can Use To Fight Overfitting and Get Better Generalization How can you get better performance from your deep learning model? It is one of the most common questions I get asked. It might be asked as: How can I improve accuracy? …or it may be reversed as: What can I do if my neural network performs poorly? I often reply with “I don’t know exactly, but I have […]

Read more

Gentle Introduction to the Adam Optimization Algorithm for Deep Learning

Last Updated on August 20, 2020 The choice of optimization algorithm for your deep learning model can mean the difference between good results in minutes, hours, or days. The Adam optimization algorithm is an extension to stochastic gradient descent that has recently seen broader adoption for deep learning applications in computer vision and natural language processing. In this post, you will get a gentle introduction to the Adam optimization algorithm for use in deep learning. After reading this post, you […]
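As a quick sketch of what using Adam looks like in Keras (the model below is illustrative, and 0.001 is simply Adam's common default learning rate):

```python
# Sketch: compiling a Keras model with the Adam optimizer.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

model = Sequential([
    Dense(10, activation='relu', input_shape=(8,)),  # illustrative architecture
    Dense(1),
])
model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
```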

Read more

How to Configure the Number of Layers and Nodes in a Neural Network

Last Updated on August 6, 2019 Artificial neural networks have two main hyperparameters that control the architecture or topology of the network: the number of layers and the number of nodes in each hidden layer. You must specify values for these parameters when configuring your network. The most reliable way to configure these hyperparameters for your specific predictive modeling problem is via systematic experimentation with a robust test harness. This can be a tough pill to swallow for beginners to […]
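A minimal sketch of configuring these two hyperparameters in Keras; the values for the number of layers and nodes below are hypothetical placeholders that would normally be chosen by systematic experimentation:

```python
# Sketch: the number of hidden layers and nodes per layer as hyperparameters.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

n_layers, n_nodes = 2, 16  # hypothetical values; tune via a test harness
model = Sequential()
model.add(Dense(n_nodes, activation='relu', input_shape=(8,)))
for _ in range(n_layers - 1):
    model.add(Dense(n_nodes, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
```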

Read more

Use Weight Regularization to Reduce Overfitting of Deep Learning Models

Last Updated on August 6, 2019 Neural networks learn a set of weights that best map inputs to outputs. A network with large weights can be a sign of an unstable network where small changes in the input can lead to large changes in the output. This can be a sign that the network has overfit the training dataset and will likely perform poorly when making predictions on new data. A solution to this problem is to update the […]
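As a conceptual sketch (not code from the post), the idea is to add a penalty proportional to the size of the weights to the loss being minimized; all values below are illustrative:

```python
# Sketch: penalizing large weights by adding an L2 term to the loss.
import numpy as np

weights = np.array([0.5, -1.2, 3.0])  # hypothetical network weights
base_loss = 0.8                       # hypothetical data loss (e.g. MSE)
lam = 0.01                            # regularization strength

penalized_loss = base_loss + lam * np.sum(weights ** 2)
```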

Read more

How to Use Weight Decay to Reduce Overfitting of Neural Networks in Keras

Last Updated on August 25, 2020 Weight regularization provides an approach to reduce the overfitting of a deep learning neural network model on the training data and improve the performance of the model on new data, such as the holdout test set. There are multiple types of weight regularization, such as L1 and L2 vector norms, and each requires a hyperparameter that must be configured. In this tutorial, you will discover how to apply weight regularization to improve the performance […]
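A minimal sketch of the Keras API involved, assuming an L2 penalty; the factor of 0.01 is an illustrative hyperparameter value:

```python
# Sketch: weight decay (L2 regularization) on a layer's weights in Keras.
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l2

layer = Dense(32, activation='relu', kernel_regularizer=l2(0.01))
```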

Read more

A Gentle Introduction to Weight Constraints in Deep Learning

Last Updated on August 6, 2019 Weight regularization methods like weight decay introduce a penalty to the loss function when training a neural network to encourage the network to use small weights. Smaller weights in a neural network can result in a model that is more stable and less likely to overfit the training dataset, in turn having better performance when making a prediction on new data. Unlike weight regularization, a weight constraint is a trigger that checks the size […]
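A conceptual sketch of that trigger-and-rescale behavior (not code from the post); the weight vector and norm limit below are illustrative:

```python
# Sketch: a max-norm constraint checks a weight vector after an update
# and rescales it only if its norm exceeds the limit.
import numpy as np

def apply_max_norm(w, max_norm=2.0):
    norm = np.linalg.norm(w)
    return w * (max_norm / norm) if norm > max_norm else w

w = np.array([3.0, 4.0])  # hypothetical weights with norm 5.0
w = apply_max_norm(w)     # rescaled to norm 2.0
```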

Read more

How to Reduce Overfitting Using Weight Constraints in Keras

Last Updated on August 25, 2020 Weight constraints provide an approach to reduce the overfitting of a deep learning neural network model on the training data and improve the performance of the model on new data, such as the holdout test set. There are multiple types of weight constraints, such as maximum and unit vector norms, and some require a hyperparameter that must be configured. In this tutorial, you will discover the Keras API for adding weight constraints to deep […]
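A minimal sketch of the Keras API involved, assuming a max-norm constraint; the limit of 3 is an illustrative value for the hyperparameter:

```python
# Sketch: constraining a layer's weights with max_norm in Keras.
from tensorflow.keras.layers import Dense
from tensorflow.keras.constraints import max_norm

layer = Dense(32, activation='relu', kernel_constraint=max_norm(3))
```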

Read more

A Gentle Introduction to Activation Regularization in Deep Learning

Last Updated on August 6, 2019 Deep learning models are capable of automatically learning a rich internal representation from raw input data. This is called feature or representation learning. Better learned representations, in turn, can lead to better insights into the domain, e.g. via visualization of learned features, and to better predictive models that make use of the learned features. A problem with learned features is that they can be too specialized to the training data, or overfit, and not […]
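As a conceptual sketch (not code from the post), activation regularization penalizes a layer's outputs rather than its weights, which pushes the learned representation toward sparsity; all values below are illustrative:

```python
# Sketch: an L1 penalty on activations encourages sparse representations.
import numpy as np

activations = np.array([0.0, 2.5, 0.0, 1.1])  # hypothetical layer outputs
base_loss = 0.6                               # hypothetical data loss
lam = 0.001                                   # regularization strength

penalized_loss = base_loss + lam * np.sum(np.abs(activations))
```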

Read more

How to Reduce Generalization Error With Activity Regularization in Keras

Last Updated on August 25, 2020 Activity regularization provides an approach to encourage a neural network to learn sparse features or internal representations of raw observations. It is common to seek sparse learned representations in autoencoders, called sparse autoencoders, and in encoder-decoder models, although the approach can also be used generally to reduce overfitting and improve a model’s ability to generalize to new observations. In this tutorial, you will discover the Keras API for adding activity regularization to deep learning […]
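A minimal sketch of the Keras API involved, assuming an L1 penalty on the layer's output; the factor of 0.001 is illustrative:

```python
# Sketch: activity regularization on a layer's output in Keras.
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l1

layer = Dense(32, activation='relu', activity_regularizer=l1(0.001))
```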

Read more