Weight Regularization with LSTM Networks for Time Series Forecasting

Last Updated on August 28, 2020

Long Short-Term Memory (LSTM) models are a type of recurrent neural network capable of learning sequences of observations.

This may make them well suited to time series forecasting.

An issue with LSTMs is that they can easily overfit the training data, reducing their predictive skill.

Weight regularization is a technique for imposing a penalty (such as an L1 or L2 norm) on the weights within LSTM nodes. This has the effect of reducing overfitting and can improve model performance.
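For example, here is a minimal sketch, assuming the Keras API; the layer size, input shape, and penalty coefficients are illustrative values, not tuned ones:

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.regularizers import L1L2

model = Sequential()
# kernel_regularizer applies the penalty to the input weights of the LSTM
# nodes; L1L2 combines both norms (set either coefficient to 0.0 to disable it)
model.add(LSTM(5, input_shape=(1, 1), kernel_regularizer=L1L2(l1=0.01, l2=0.01)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
```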

In this tutorial, you will discover how to use weight regularization with LSTM networks and how to design experiments to test its effectiveness for time series forecasting.

After completing this tutorial, you will know:

  • How to design a robust test harness for evaluating LSTM networks for time series forecasting.
  • How to design, execute, and interpret the results from using bias weight regularization with LSTMs.
  • How to design, execute, and interpret the results from using input and recurrent weight regularization with LSTMs (see the sketch after this list).
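As a sketch of the last two points, and again assuming the Keras LSTM layer, each of the three weight sets in an LSTM node is exposed as a separate regularizer argument; the coefficient values below are placeholders that an experiment would vary over a grid:

```python
from keras.layers import LSTM
from keras.regularizers import L1L2

layer = LSTM(
    5,
    input_shape=(1, 1),
    bias_regularizer=L1L2(l1=0.0, l2=0.01),       # penalty on the bias weights
    kernel_regularizer=L1L2(l1=0.0, l2=0.01),     # penalty on the input weights
    recurrent_regularizer=L1L2(l1=0.0, l2=0.01),  # penalty on the recurrent weights
)
```

In an experiment, each regularizer would typically be tested in isolation across a range of coefficients, with the remaining arguments left at their defaults, so that the effect of each weight set can be interpreted separately.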

Kick-start your project with my new book Deep Learning for Time Series Forecasting, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Updated Apr/2019: Updated the link to the dataset.