Get the Most out of LSTMs on Your Sequence Prediction Problem

Last Updated on August 14, 2019

Long Short-Term Memory (LSTM) Recurrent Neural Networks are a powerful type of deep learning model suited to sequence prediction problems.

A possible concern when using LSTMs is whether the added complexity of the model is actually improving its skill, or is in fact resulting in lower skill than simpler models would achieve.

In this post, you will discover simple experiments you can run to ensure you are getting the most out of LSTMs on your sequence prediction problem.

After reading this post, you will know:

  • How to test if your model is exploiting order dependence in your input data (a minimal code sketch follows this list).
  • How to test if your LSTM model is harnessing its internal memory.
  • How to test if your model is harnessing Backpropagation Through Time (BPTT) during fitting.
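Before going further, here is a minimal sketch of the first of these tests, written for illustration only: it compares the skill of a small Keras LSTM fit on time-ordered sequences against the same model fit on sequences whose timesteps have been shuffled. The data, model size, and training settings are all hypothetical; only the shuffle-and-compare idea comes from the post.

from numpy.random import shuffle, rand
from keras.models import Sequential
from keras.layers import LSTM, Dense

def fit_and_score(X, y):
    # fit a small LSTM on [samples, timesteps, features] data and return its MSE
    model = Sequential()
    model.add(LSTM(10, input_shape=(X.shape[1], X.shape[2])))
    model.add(Dense(1))
    model.compile(loss='mse', optimizer='adam')
    model.fit(X, y, epochs=100, verbose=0)
    return model.evaluate(X, y, verbose=0)

# hypothetical task: predict the value at the final timestep of each sequence
X = rand(100, 5, 1)
y = X[:, -1, 0]

# baseline skill on the original, time-ordered sequences
ordered_mse = fit_and_score(X, y)

# shuffle the timesteps within each sequence, destroying order dependence
X_shuffled = X.copy()
for sequence in X_shuffled:
    shuffle(sequence)
shuffled_mse = fit_and_score(X_shuffled, y)

# if shuffling does not hurt skill, the model is not exploiting order dependence
print('ordered MSE: %.4f, shuffled MSE: %.4f' % (ordered_mse, shuffled_mse))

If the two scores are comparable, your problem may not benefit from order dependence, and a simpler order-free model (for example, an MLP on the flattened inputs) may serve you just as well.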

Kick-start your project with my new book Long Short-Term Memory Networks With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s dive in.
