Implementation Patterns for the Encoder-Decoder RNN Architecture with Attention
Last Updated on August 14, 2019

The encoder-decoder architecture for recurrent neural networks has proven powerful on a host of sequence-to-sequence prediction problems in natural language processing. Attention is a mechanism that addresses a limitation of the encoder-decoder architecture on long sequences; in general, it also speeds up learning and lifts the skill of the model on sequence-to-sequence prediction problems. In this post, you will discover patterns for implementing the encoder-decoder model with and […]
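The core attention idea described above — letting the decoder look back over all encoder hidden states rather than a single fixed-length vector — can be sketched in a few lines of NumPy. This is a minimal illustration using dot-product scoring (a simplified variant; the original encoder-decoder attention papers use learned additive scoring), and all names here are illustrative:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_context(encoder_states, decoder_state):
    """Build a context vector by attending over encoder hidden states.

    encoder_states: (T, H) array, one hidden state per input time step
    decoder_state:  (H,) current decoder hidden state (the query)
    """
    scores = encoder_states @ decoder_state   # (T,) alignment scores, dot-product scoring
    weights = softmax(scores)                 # (T,) attention weights, sum to 1
    context = weights @ encoder_states        # (H,) weighted sum of encoder states
    return context, weights

# toy example: 4 encoder time steps, hidden size 3
enc = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [1.0, 1.0, 0.0]])
dec = np.array([1.0, 0.0, 0.0])
context, weights = attention_context(enc, dec)
```

The context vector is recomputed at every decoding step, so the decoder can focus on different parts of a long input sequence as it generates each output token — this is what relieves the fixed-length bottleneck of the plain encoder-decoder.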