Issue #38 – Incremental Interlingua-based Neural Machine Translation
23 May 2019
Author: Dr. Marta R. Costa-jussà, a Ramón y Cajal Researcher, TALP Research Center, Universitat Politècnica de Catalunya, Barcelona
This week, we have a guest post from Marta R. Costa-jussà, a Ramón y Cajal Researcher at the TALP Research Center, Universitat Politècnica de Catalunya, in Barcelona. In Issue #37 we saw that, for zero-shot translation to work well, we must be able to encode the source text into a language-independent representation and to decode from this common representation into the target language. In this week’s issue, Marta gives us more insight into this topic and explains how to build such a system incrementally.
Introduction
Multilingual Neural Machine Translation is now standard practice. A typical architecture uses a single universal encoder and decoder fed with multiple languages during training, which allows zero-shot translation at inference time. The decoder is told which language to translate into simply by recognising a tag in the source sentence that carries this information. An alternative to this architecture is to use a separate encoder and decoder per language while sharing an attention layer, which becomes the
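The target-language tag mentioned above can be sketched as simple data preprocessing. Below is a minimal, illustrative example of prepending an artificial token to the source so the universal decoder knows which language to produce; the function name and tag format (`<2xx>`) are assumptions for illustration, not from any specific toolkit.

```python
def add_target_tag(source_tokens, target_lang):
    """Prepend an artificial token (e.g. <2es> for Spanish) telling
    the shared decoder which target language to generate.
    Hypothetical helper, shown for illustration only."""
    return [f"<2{target_lang}>"] + list(source_tokens)


# The same source sentence can be paired with different targets;
# only the leading tag distinguishes the translation direction,
# so one model can serve many language pairs.
en_to_fr = add_target_tag("the house is small".split(), "fr")
en_to_es = add_target_tag("the house is small".split(), "es")

print(en_to_fr)  # ['<2fr>', 'the', 'house', 'is', 'small']
print(en_to_es)  # ['<2es>', 'the', 'house', 'is', 'small']
```

Because the tag is just another vocabulary item, zero-shot inference amounts to emitting a tag for a language pair never seen together in training.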
To finish reading, please visit source site