Issue #28 – Hybrid Unsupervised Machine Translation

07 Mar 2019


Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

In Issue #11 of this series, we first looked directly at the topic of unsupervised machine translation – training an engine without any parallel data. Since then, it has gone from a promising concept to one that can produce effective systems performing close to the level of fully supervised engines (those trained with parallel data). The prospect of building good MT engines with only monolingual data has tremendous potential impact, since it opens up possibilities of applying MT in many new scenarios, particularly for low-resource languages.

The growing interest in this topic has been reflected in this blog: this issue is actually the fourth article to deal with the idea, including coverage dating back before Issue #11. This week, we’ll give a quick recap of what has been done before taking a look at the paper by Artetxe et al. (2019), which proposes an effective approach to building an unsupervised neural MT engine initialised by an improved phrase-based unsupervised MT engine.

Recap

When translating in a direction in which there is no parallel training data, the
