Issue #85 – Applying Terminology Constraints in Neural MT

11 Jun 2020 | Author: Dr. Chao-Hong Liu, Machine Translation Scientist @ Iconic

Maintaining consistency of terminology translation in Neural Machine Translation (NMT) is a more challenging task than in Statistical MT (SMT). In this post, we review a method proposed by Dinu et al. (2019) to train NMT to use custom terminology. Applying terminology constraints to translation may appear to be an easy task. It is a […]
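
To make the idea concrete, here is a minimal sketch of the kind of inline annotation Dinu et al. (2019) train on: glossary matches in the source are tagged and their desired target translations injected next to them, with a parallel factor stream marking each token's role. The function name, factor values, and toy glossary are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch: inline terminology annotation in the style of Dinu et al. (2019).
# Factor meaning (an assumption for this demo): 0 = ordinary token,
# 1 = source side of a glossary term, 2 = injected target translation.

def annotate_source(tokens, glossary):
    """Return (annotated_tokens, factors) for one source sentence."""
    annotated, factors = [], []
    for tok in tokens:
        if tok in glossary:                  # glossary: source term -> target term
            annotated.append(tok)
            factors.append(1)                # source side of the term
            for t in glossary[tok].split():
                annotated.append(t)
                factors.append(2)            # injected target translation
        else:
            annotated.append(tok)
            factors.append(0)                # ordinary token
    return annotated, factors

tokens, factors = annotate_source(
    "the patient shows acute symptoms".split(),
    {"acute": "akut"},   # tiny toy EN->DE glossary
)
print(tokens)   # ['the', 'patient', 'shows', 'acute', 'akut', 'symptoms']
print(factors)  # [0, 0, 0, 1, 2, 0]
```

Trained on such data, the model learns to copy the injected target terms into its output, so at inference time a custom glossary can steer terminology without retraining.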


Issue #79 – Merging Terminology into Neural Machine Translation

23 Apr 2020 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

After several years as the state of the art in Machine Translation, neural MT still doesn’t have a convenient way to enforce the translation of custom terms according to a glossary. In Issue #7, we reviewed several approaches to handle terminology in neural MT. Just adding the glossary to the training data is not effective. Replacing the source term by a […]
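
One family of approaches reviewed back in Issue #7 swaps glossary terms for placeholder tokens before translation and restores the target terms afterwards. The sketch below shows that pipeline shape only; it is illustrative, not necessarily the variant this post settles on, and translate() is a stand-in for any real NMT engine.

```python
# Sketch: placeholder-based glossary handling (one approach among several).
import re

def translate(text):                 # stand-in for a real NMT call
    return text                      # identity, just for the demo

def translate_with_glossary(src, glossary):
    mapping = {}
    for i, (term, target) in enumerate(glossary.items()):
        ph = f"TERM{i}"              # opaque placeholder the engine passes through
        src, n = re.subn(rf"\b{re.escape(term)}\b", ph, src)
        if n:
            mapping[ph] = target
    out = translate(src)
    for ph, target in mapping.items():   # restore glossary translations
        out = out.replace(ph, target)
    return out

print(translate_with_glossary("the gearbox is faulty", {"gearbox": "Getriebe"}))
# -> 'the Getriebe is faulty' (a real engine would translate the rest too)
```

The known weakness of placeholders is that the engine loses the term's meaning during translation, which can hurt agreement and word order around the placeholder.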


Issue #41 – Deep Transformer Models for Neural MT

13 Jun 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

The Transformer is a state-of-the-art Neural MT model, as we covered previously in Issue #32. So what happens when something works well with neural networks? We try to go wider and deeper! There are two research directions that look promising to enhance the Transformer model: building wider networks by increasing the size of the word representations and attention vectors, or building […]
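
The two scaling directions are easy to see in code. Below is a minimal sketch using PyTorch's built-in Transformer encoder; the dimensions are illustrative choices, not the settings from the work reviewed in the post.

```python
# Sketch: "wider" vs. "deeper" Transformer encoders (illustrative sizes).
import torch
import torch.nn as nn

def encoder(d_model, n_layers, n_heads=8):
    layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                       dim_feedforward=4 * d_model)
    return nn.TransformerEncoder(layer, num_layers=n_layers)

wide = encoder(d_model=1024, n_layers=6)    # wider: larger representations
deep = encoder(d_model=512,  n_layers=24)   # deeper: more stacked layers

x = torch.randn(10, 2, 512)                 # (seq_len, batch, d_model)
print(deep(x).shape)                        # torch.Size([10, 2, 512])
```

In practice, simply stacking more layers tends to destabilise training, which is why deep-Transformer research focuses as much on normalisation and residual-connection schemes as on depth itself.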


Issue #40 – Consistency by Agreement in Zero-shot Neural MT

06 Jun 2019 | Author: Raj Patel, Machine Translation Scientist @ Iconic

In two of our earlier posts (Issues #6 and #37), we discussed the zero-shot approach to Neural MT – learning to translate from source to target without seeing even a single example of the language pair directly. In Neural MT, zero-shot training is achieved using a multilingual architecture (Johnson et al. 2017) – a single NMT engine that can translate between […]
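
For intuition, the "agreement" in the title can be pictured as a regulariser that pulls together two predicted distributions over an auxiliary language: one obtained from the source sentence and one from the target sentence. The toy snippet below uses random distributions and a symmetric KL term purely to illustrate the shape of such a regulariser; it is a simplification, not the paper's actual objective.

```python
# Toy illustration of an agreement-style regulariser (simplified).
import torch
import torch.nn.functional as F

# hypothetical next-token log-distributions over a 1000-word vocabulary
# of the auxiliary language z, one conditioned on x and one on y
p_from_x = F.log_softmax(torch.randn(1, 1000), dim=-1)
p_from_y = F.log_softmax(torch.randn(1, 1000), dim=-1)

agree = (F.kl_div(p_from_x, p_from_y, log_target=True, reduction="batchmean")
         + F.kl_div(p_from_y, p_from_x, log_target=True, reduction="batchmean"))
print(agree)  # scalar term added to the usual translation loss
```

Minimising such a term encourages the multilingual model to behave consistently on language directions it never saw in training, which is exactly where plain zero-shot systems tend to drift.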


Issue #39 – Context-aware Neural Machine Translation

30 May 2019 | Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic

Back in Issue #15, we looked at the topic of document-level translation and the idea of looking at more context than just the sentence when machine translating. In this post, we take a more general look at the role of context in machine translation as it relates to specific types of linguistic phenomena and the issues surrounding them. We review the work […]
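
The simplest way to give a sentence-level NMT model cross-sentence context is to concatenate the previous source sentence to the current one, separated by a break token, and train on these extended inputs. The sketch below shows that baseline; the token name is an assumption, and the context-aware architectures discussed in the literature are usually more elaborate (e.g. separate context encoders).

```python
# Sketch: concatenation-based context for document-level NMT.
BREAK = "<brk>"   # illustrative sentence-break token

def add_context(sentences, window=1):
    """Yield each sentence prefixed with up to `window` previous sentences."""
    for i, sent in enumerate(sentences):
        ctx = sentences[max(0, i - window):i]
        yield " ".join(ctx + ([BREAK] if ctx else []) + [sent])

doc = ["She put the cake in the oven .", "It will be ready soon ."]
for src in add_context(doc):
    print(src)
# She put the cake in the oven .
# She put the cake in the oven . <brk> It will be ready soon .
```

Even this crude context is enough to help with phenomena like the pronoun "It" above, whose correct translation in many languages depends on the gender of "cake" in the previous sentence.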


Issue #37 – Zero-shot Neural MT as Domain Adaptation

16 May 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

Zero-shot machine translation – a topic we first covered in Issue #6 – is the idea that you can have a single MT engine that can translate between multiple languages. Such multilingual Neural MT systems can be built by simply concatenating parallel sentence pairs in several language directions and only adding a token on the source side indicating to which […]
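
A small sketch of that data preparation, in the style of Johnson et al. (2017): pairs from several language directions are pooled, and a token prepended to each source sentence tells the single engine which target language to produce. The token spelling is illustrative.

```python
# Sketch: pooling multilingual data with a target-language token.
pairs = [
    ("en", "fr", "How are you ?",    "Comment allez-vous ?"),
    ("en", "de", "How are you ?",    "Wie geht es Ihnen ?"),
    ("fr", "de", "Merci beaucoup .", "Vielen Dank ."),
]

corpus = [(f"<2{tgt_lang}> {src_sent}", tgt_sent)
          for src_lang, tgt_lang, src_sent, tgt_sent in pairs]

for src_sent, tgt_sent in corpus:
    print(src_sent, "->", tgt_sent)
# <2fr> How are you ? -> Comment allez-vous ?
# <2de> How are you ? -> Wie geht es Ihnen ?
# <2de> Merci beaucoup . -> Vielen Dank .

# Zero-shot: at inference time, "<2fr> Wie geht es Ihnen ?" requests a
# de->fr translation even though no de-fr pair was seen in training.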


Issue #36 – Average Attention Network for Neural MT

09 May 2019 | Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic

In Issue #32, we covered the Transformer model, the state of the art in neural machine translation. In this post we explore a technique presented by Zhang et al. (2018), which modifies the Transformer model and speeds up the translation process by 4-7 times across a range of different engines. Where is the bottleneck? In the […]
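
The core trick is to replace the decoder's self-attention with a cumulative average over the positions generated so far, which at decoding time can be maintained as a running sum in constant time per step. The sketch below shows only that core; the gating and feed-forward layers from Zhang et al. (2018) are omitted.

```python
# Sketch: cumulative-average "attention" at the heart of the AAN idea.
import torch

def average_attention(y):
    """y: (seq_len, d_model) -> cumulative averages, same shape."""
    steps = torch.arange(1, y.size(0) + 1, dtype=y.dtype).unsqueeze(1)
    return torch.cumsum(y, dim=0) / steps   # position j averages y[0..j]

y = torch.randn(5, 8)
g = average_attention(y)
assert torch.allclose(g[2], y[:3].mean(dim=0))   # position 3 = mean of first 3
```

Because each step only updates a running sum instead of attending over all previous positions, the per-step decoding cost drops from linear in the output length to constant, which is where the reported speed-up comes from.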


Issue #35 – Text Repair Model for Neural Machine Translation

02 May 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

Neural machine translation engines produce systematic errors which are not always easy to detect and correct in an end-to-end framework with millions of hidden parameters. One potential way to resolve these issues is to do so after the fact – correcting the errors by post-processing the output with an automatic post-editing (APE) step. This week we take a look at […]
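
The pipeline shape is simple: a second model sees the source and the draft translation and emits a corrected translation. The sketch below shows only that shape; both models are stand-in functions, not any specific system from the post.

```python
# Sketch: automatic post-editing (APE) as a second pass over MT output.
def mt_engine(source: str) -> str:               # first pass: draft translation
    return "draft translation of " + source

def repair_model(source: str, draft: str) -> str:  # second pass: fix errors
    return draft.replace("draft", "repaired")      # toy "correction"

def translate_with_ape(source: str) -> str:
    draft = mt_engine(source)
    return repair_model(source, draft)             # repair sees source + draft

print(translate_with_ape("Hallo Welt"))
# repaired translation of Hallo Welt
```

Giving the repair model access to the source as well as the draft is what lets it fix systematic errors that a monolingual correction step could not detect.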


Issue #34 – Non-Parametric Domain Adaptation for Neural MT

25 Apr 2019 | Author: Raj Patel, Machine Translation Scientist @ Iconic

In a few of our earlier posts (Issues #9 and #19), we discussed the topic of domain adaptation – the process of developing and adapting machine translation engines for specific industries, content types, and use cases – in the context of Neural MT. In general, domain adaptation methods require retraining of neural models, using in-domain data or infusing domain information at the […]
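
A non-parametric alternative avoids retraining by fetching similar sentence pairs from an in-domain corpus at translation time and handing them to the model as extra conditioning context. The sketch below illustrates only the retrieval step, with a simple TF-IDF nearest-neighbour lookup standing in for the retrieval component; how the retrieved pairs are consumed by the network is the model-specific part and is omitted here.

```python
# Sketch: retrieval step for non-parametric domain adaptation (illustrative).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

in_domain = [
    ("the gearbox housing is cracked", "das Getriebegehäuse ist gerissen"),
    ("replace the oil filter",         "den Ölfilter ersetzen"),
]

vec = TfidfVectorizer().fit([s for s, _ in in_domain])
index = vec.transform([s for s, _ in in_domain])

def retrieve(query, k=1):
    """Return the k most similar in-domain sentence pairs for a query."""
    sims = cosine_similarity(vec.transform([query]), index)[0]
    return [in_domain[i] for i in sims.argsort()[::-1][:k]]

print(retrieve("check the gearbox housing"))
# [('the gearbox housing is cracked', 'das Getriebegehäuse ist gerissen')]
```

The appeal is operational: adapting to a new domain means updating the retrieval corpus, not retraining or fine-tuning the neural model.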


Issue #33 – Neural MT for Spoken Language Translation

11 Apr 2019 | Author: Dr. Marco Turchi, Head of the Machine Translation Group at Fondazione Bruno Kessler (FBK), Italy

This week, we have a guest post from Dr. Marco Turchi, head of the Machine Translation Group at Fondazione Bruno Kessler (FBK) in Italy. Marco and his team have a very strong pedigree in the field, and have been at the forefront of academic research into Neural MT. In this issue, […]
