Encoder-Decoder Recurrent Neural Network Models for Neural Machine Translation

Last Updated on August 7, 2019

The encoder-decoder architecture for recurrent neural networks is the standard neural machine translation method that rivals and in some cases outperforms classical statistical machine translation methods.

This architecture is relatively new, having been pioneered only in 2014, yet it has already been adopted as the core technology inside Google’s translate service.

In this post, you will discover the two seminal examples of the encoder-decoder model for neural machine translation.

After reading this post, you will know:

  • The encoder-decoder recurrent neural network architecture is the core technology inside Google’s translate service.
  • The so-called “Sutskever model” for direct end-to-end machine translation (sketched in code after this list).
  • The so-called “Cho model” that extends the architecture with GRU units and an attention mechanism.
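
To make the architecture concrete before diving in, here is a minimal sketch of a Sutskever-style encoder-decoder in Keras. The vocabulary sizes and layer dimensions are illustrative assumptions, not values from the original papers, and the decoder input assumes teacher forcing during training.

```python
# A minimal sketch of a Sutskever-style encoder-decoder model in Keras.
# Vocabulary sizes and layer dimensions below are illustrative assumptions.
from tensorflow.keras.layers import Input, LSTM, Dense, Embedding
from tensorflow.keras.models import Model

src_vocab, tgt_vocab = 10000, 10000   # assumed vocabulary sizes
embed_dim, hidden_dim = 256, 512      # assumed layer sizes

# Encoder: reads the source sequence and keeps only its final state,
# a fixed-length summary of the whole input sentence.
enc_inputs = Input(shape=(None,))
enc_embed = Embedding(src_vocab, embed_dim)(enc_inputs)
_, state_h, state_c = LSTM(hidden_dim, return_state=True)(enc_embed)

# Decoder: generates the target sequence one step at a time,
# initialized with the encoder's final state.
dec_inputs = Input(shape=(None,))
dec_embed = Embedding(tgt_vocab, embed_dim)(dec_inputs)
dec_outputs, _, _ = LSTM(hidden_dim, return_sequences=True,
                         return_state=True)(dec_embed,
                                            initial_state=[state_h, state_c])
outputs = Dense(tgt_vocab, activation="softmax")(dec_outputs)

model = Model([enc_inputs, dec_inputs], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Cho’s variant would swap the LSTM layers for GRU layers (which carry a single state rather than the LSTM’s hidden/cell pair), and the attention extension would replace the fixed-length state handoff with a weighted sum over all encoder outputs.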

Kick-start your project with my new book Deep Learning for Natural Language Processing, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Photo by Fabio Pani, some rights reserved.

Encoder-Decoder Architecture for NMT