Issue #55 – Word Alignment from Neural Machine Translation

10 Oct 2019


Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

Word alignments were the cornerstone of all previous approaches to statistical MT: you take your parallel corpus, align the words, and build from there. In Neural MT, however, word alignment is no longer needed as an input to the system. That said, research is coming back around to the idea that it remains useful in real-world scenarios, for practical tasks such as transferring tags into the MT output.

Conveniently, current Neural MT engines can extract word alignments from their attention weights. Unfortunately, the quality of these alignments is worse than that of external word alignments produced with traditional SMT approaches, because attending to context words, rather than to the aligned source words, may be more useful for translation. In this post, we take a look at two papers proposing methods that improve the word alignments extracted from Transformer models in Neural MT.
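As a minimal sketch of the extraction idea: given an attention weight matrix (target tokens × source tokens), each target word can be aligned to the source word it attends to most. This is a simplified illustration with a hard-coded matrix; in a real Transformer the weights are spread over multiple layers and heads, so practical extraction typically averages over heads or picks a specific layer.

```python
import numpy as np

# Hypothetical attention matrix for illustration:
# rows = target tokens, columns = source tokens.
# In a real Transformer, these weights would come from a chosen
# layer/head (or an average over heads), not be hard-coded.
attention = np.array([
    [0.7, 0.2, 0.1],   # target word 0 attends mostly to source word 0
    [0.1, 0.8, 0.1],   # target word 1 -> source word 1
    [0.2, 0.1, 0.7],   # target word 2 -> source word 2
])

def alignments_from_attention(attn):
    """Align each target word to the source word with the highest
    attention weight (a simple argmax heuristic)."""
    return [(t, int(np.argmax(row))) for t, row in enumerate(attn)]

print(alignments_from_attention(attention))
# [(0, 0), (1, 1), (2, 2)]
```

The argmax heuristic is exactly where the quality problem arises: when the model attends to context words, the maximum weight need not sit on the truly aligned source word.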

Although both papers report results with the Alignment Error Rate (AER) metric, which has been shown to be inadequate for measuring alignment quality (see Fraser and Marcu, 2007, or Lambert et al.,
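For reference, the AER metric mentioned above can be computed as follows. This is a sketch of the standard formula (Och and Ney, 2003), which compares a hypothesis alignment A against human-annotated sure (S) and possible (P) links; the example link sets are invented for illustration.

```python
def aer(sure, possible, hypothesis):
    """Alignment Error Rate: 1 - (|A∩S| + |A∩P|) / (|A| + |S|).
    `sure` should be a subset of `possible`; lower AER is better."""
    a, s, p = set(hypothesis), set(sure), set(possible)
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))

# Toy example (invented links): perfect hypothesis gives AER = 0.
sure = {(0, 0), (1, 1)}
possible = sure | {(2, 2)}          # sure links plus one possible link
print(aer(sure, possible, [(0, 0), (1, 1), (2, 2)]))  # 0.0
print(aer(sure, possible, [(0, 0), (1, 2)]))          # 0.5
```

The criticisms cited above concern how this metric weights sure versus possible links, not the arithmetic itself.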
