How to Use an Encoder-Decoder LSTM to Echo Sequences of Random Integers

Last Updated on August 27, 2020

A powerful feature of Long Short-Term Memory (LSTM) recurrent neural networks is that they can remember observations over long sequence intervals.

This can be demonstrated by contriving a simple sequence echo problem where the entire input sequence or partial contiguous blocks of the input sequence are echoed as an output sequence.

Developing LSTM recurrent neural networks to address the sequence echo problem is both a good demonstration of the power of LSTMs and a useful way to explore state-of-the-art recurrent neural network architectures.

In this post, you will discover how to develop LSTMs to solve the full and partial sequence echo problems in Python using the Keras deep learning library.

After completing this tutorial, you will know:

  • How to generate random sequences of integers, represent them using a one-hot encoding, and frame the sequence as a supervised learning problem with input and output pairs (see the first sketch after this list).
  • How to develop a sequence-to-sequence LSTM to echo the entire input sequence as an output (second sketch below).
  • How to develop an encoder-decoder LSTM to echo partial sequences with output lengths that differ from the input sequence (third sketch below).
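
Each of these steps is sketched below. First, a minimal sketch of the data preparation step. The helper names (generate_sequence, one_hot_encode, one_hot_decode) and the choices of a sequence length of 25 and 100 possible integer values are illustrative assumptions, not fixed by the tutorial:

```python
from random import randint
import numpy as np

def generate_sequence(length=25, n_features=100):
    # draw `length` random integers in [0, n_features - 1]
    return [randint(0, n_features - 1) for _ in range(length)]

def one_hot_encode(sequence, n_features=100):
    # map each integer to a binary vector with a single 1 at its index
    encoding = np.zeros((len(sequence), n_features), dtype=np.float32)
    for i, value in enumerate(sequence):
        encoding[i, value] = 1.0
    return encoding

def one_hot_decode(encoded_seq):
    # invert the encoding by taking the argmax of each vector
    return [int(np.argmax(vector)) for vector in encoded_seq]

# frame one sample as a supervised (input, output) pair: for the full
# echo problem, the target is simply the input sequence itself
sequence = generate_sequence()
X = one_hot_encode(sequence).reshape(1, 25, 100)  # [samples, timesteps, features]
y = X.copy()
print(one_hot_decode(X[0]))
```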
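
Next, one way to echo the entire input sequence is a sequence-to-sequence LSTM: return_sequences=True emits an output at every time step, and a TimeDistributed dense layer classifies each step. This sketch assumes the tensorflow.keras API; the layer size (150 units) and the number of training samples (500) are assumptions for illustration:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, TimeDistributed

length, n_features = 25, 100

def random_one_hot(length, n_features):
    # one random integer sequence, one-hot encoded: [1, timesteps, features]
    seq = np.random.randint(0, n_features, length)
    return np.eye(n_features, dtype=np.float32)[seq].reshape(1, length, n_features)

model = Sequential()
model.add(LSTM(150, input_shape=(length, n_features), return_sequences=True))
model.add(TimeDistributed(Dense(n_features, activation='softmax')))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# train on freshly generated samples; the target equals the input (a full echo)
for _ in range(500):
    X = random_one_hot(length, n_features)
    model.fit(X, X, epochs=1, verbose=0)

# spot-check: the per-step argmax of the prediction should match the input
X = random_one_hot(length, n_features)
yhat = model.predict(X, verbose=0)
print(np.argmax(X[0], axis=1))
print(np.argmax(yhat[0], axis=1))
```

Because this model produces one output per input time step, the input and output lengths must match; that is the limitation the encoder-decoder below removes.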
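
Finally, when the output length differs from the input length, an encoder-decoder arrangement can be used: the encoder LSTM compresses the input into a fixed-size vector, RepeatVector presents that vector once per output step, and the decoder LSTM unrolls it into the shorter output sequence. Echoing the first 5 of 25 steps is an assumed example of a partial echo, not the specific framing from the full post:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, TimeDistributed, RepeatVector

length, out_length, n_features = 25, 5, 100

def random_one_hot(length, n_features):
    # one random integer sequence, one-hot encoded: [1, timesteps, features]
    seq = np.random.randint(0, n_features, length)
    return np.eye(n_features, dtype=np.float32)[seq].reshape(1, length, n_features)

model = Sequential()
model.add(LSTM(150, input_shape=(length, n_features)))  # encoder: fixed-size state
model.add(RepeatVector(out_length))                     # one copy per output time step
model.add(LSTM(150, return_sequences=True))             # decoder
model.add(TimeDistributed(Dense(n_features, activation='softmax')))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

for _ in range(500):
    X = random_one_hot(length, n_features)
    y = X[:, :out_length, :]  # target: echo only the first out_length input steps
    model.fit(X, y, epochs=1, verbose=0)
```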

Kick-start your project with my new book Long Short-Term Memory Networks With Python.