How to Develop a Seq2Seq Model for Neural Machine Translation in Keras

Last Updated on August 7, 2019

The encoder-decoder model provides a pattern for using recurrent neural networks to address challenging sequence-to-sequence prediction problems, such as machine translation.

Encoder-decoder models can be developed in the Keras Python deep learning library. An example of a neural machine translation system built with this architecture is described on the Keras blog, and the sample code is distributed with the Keras project.

In this post, you will discover how to define an encoder-decoder sequence-to-sequence prediction model for machine translation, as described by the author of the Keras deep learning library.

After reading this post, you will know:

  • The neural machine translation example provided with Keras and described on the Keras blog.
  • How to correctly define an encoder-decoder LSTM for training a neural machine translation model.
  • How to correctly define an inference model for using a trained encoder-decoder model to translate new sequences.
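The two models named above can be sketched with the Keras functional API, along the lines of the lstm_seq2seq example from the Keras blog: one LSTM encoder whose final states initialize an LSTM decoder for training, plus separate encoder and decoder models that reuse the same layers for step-by-step inference. The vocabulary sizes and latent dimension below are hypothetical placeholders, not values from this post.

```python
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

# Hypothetical sizes for illustration (assume one-hot character inputs).
num_encoder_tokens = 71
num_decoder_tokens = 93
latent_dim = 256

# Training model: encoder reads the source sequence and keeps only its states.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)
encoder_states = [state_h, state_c]

# Decoder is conditioned on the encoder states (teacher forcing at train time).
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation="softmax")
decoder_outputs = decoder_dense(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Inference models reuse the trained layers.
# Encoder: source sequence in, states out.
encoder_model = Model(encoder_inputs, encoder_states)

# Decoder: one step at a time, fed its own previous states.
state_input_h = Input(shape=(latent_dim,))
state_input_c = Input(shape=(latent_dim,))
step_outputs, step_h, step_c = decoder_lstm(
    decoder_inputs, initial_state=[state_input_h, state_input_c]
)
step_outputs = decoder_dense(step_outputs)
decoder_model = Model(
    [decoder_inputs, state_input_h, state_input_c],
    [step_outputs, step_h, step_c],
)
```

At inference time, the encoder model is run once per source sequence, and the decoder model is then called in a loop, feeding back the sampled token and the returned states until an end-of-sequence token is produced.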

Kick-start your project with my new book Deep Learning for Natural Language Processing, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

How to Define an Encoder-Decoder Sequence-to-Sequence Model for Neural Machine Translation