Issue #40 – Consistency by Agreement in Zero-shot Neural MT

06 Jun 2019 | Author: Raj Patel, Machine Translation Scientist @ Iconic
In two of our earlier posts (Issues #6 and #37), we discussed the zero-shot approach to Neural MT – learning to translate from source to target without seeing even a single example of the language pair directly. In Neural MT, zero-shot training is achieved using a multilingual architecture (Johnson et al. 2017) – a single NMT engine that can translate between […]


Issue #39 – Context-aware Neural Machine Translation

30 May 2019 | Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic
Back in Issue #15, we looked at the topic of document-level translation and the idea of using more context than just the sentence when machine translating. In this post, we look more generally at the role of context in machine translation as it relates to specific types of linguistic phenomena and the issues surrounding them. We review the work […]


Issue #37 – Zero-shot Neural MT as Domain Adaptation

16 May 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic
Zero-shot machine translation – a topic we first covered in Issue #6 – is the idea that you can have a single MT engine that can translate between multiple languages. Such multilingual Neural MT systems can be built by simply concatenating parallel sentence pairs in several language directions and only adding a token on the source side indicating to which […]
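To make that mechanism concrete, here is a minimal sketch, in the style of Johnson et al. (2017), of how such multilingual training data can be assembled with a target-language token on the source side. It is our own toy illustration, not Iconic's or the paper's actual code; the language codes, token format, and sentences are made up.

```python
# Toy illustration of multilingual / zero-shot data preparation.
# The <2xx> token format and the example sentences are hypothetical.

def tag_source(src_sentence: str, target_lang: str) -> str:
    """Prepend a target-language token (e.g. <2es>) to the source sentence."""
    return f"<2{target_lang}> {src_sentence}"

# Hypothetical parallel corpora for two supervised directions.
en_fr = [("the cat sleeps", "le chat dort")]
fr_es = [("le chat dort", "el gato duerme")]

# One concatenated training set for a single multilingual engine.
training_data = (
    [(tag_source(src, "fr"), tgt) for src, tgt in en_fr]
    + [(tag_source(src, "es"), tgt) for src, tgt in fr_es]
)

for src, tgt in training_data:
    print(src, "->", tgt)

# At inference, "<2es> the cat sleeps" requests English->Spanish, a direction
# the engine never saw during training (the zero-shot case).
```

Because all directions share one model and one set of language tokens, a direction that was never observed in training can still be requested at inference time.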


Issue #36 – Average Attention Network for Neural MT

09 May 2019 | Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic
In Issue #32, we covered the Transformer model, the state of the art in neural machine translation. In this post, we explore a technique presented by Zhang et al. (2018), which modifies the Transformer model and speeds up the translation process by 4-7 times across a range of different engines. Where is the bottleneck? In the […]
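Roughly speaking, the decoding cost lies in the decoder's self-attention, which looks back over all previously generated positions at every step. Below is a simplified NumPy sketch of the cumulative-average operation at the heart of the Average Attention Network; the actual layer in Zhang et al. (2018) also applies a feed-forward transform and gating, which we omit here.

```python
import numpy as np

def average_attention(prev_embeddings: np.ndarray) -> np.ndarray:
    """Cumulative average of the embeddings at positions 1..j, for every step j.

    prev_embeddings: array of shape (length, d_model).
    Unlike decoder self-attention, step j can be updated from step j-1 with a
    running sum, so the per-step cost at decoding time stays constant.
    """
    positions = np.arange(1, prev_embeddings.shape[0] + 1)[:, None]
    return np.cumsum(prev_embeddings, axis=0) / positions

# Toy input: 4 decoded positions with a model dimension of 3 (made-up numbers).
x = np.random.default_rng(0).normal(size=(4, 3))
print(average_attention(x))
```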


Issue #35 – Text Repair Model for Neural Machine Translation

02 May 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic
Neural machine translation engines produce systematic errors which are not always easy to detect and correct in an end-to-end framework with millions of hidden parameters. One potential way to resolve these issues is to do so after the fact – correcting the errors by post-processing the output with an automatic post-editing (APE) step. This week we take a look at […]
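As a schematic illustration of where such a repair step sits in the pipeline, here is a toy two-stage setup; `translate` and `repair` are hypothetical placeholders, not the models discussed in the post.

```python
# Schematic two-stage pipeline: translate first, then repair the raw output.
# Both functions are stand-ins; real systems would be trained neural models.

def translate(source: str) -> str:
    """Placeholder for the end-to-end NMT engine."""
    return "raw MT output for: " + source

def repair(source: str, mt_output: str) -> str:
    """Placeholder for a repair/APE model that corrects systematic errors,
    conditioning on both the source sentence and the raw MT output."""
    return mt_output  # a trained model would return a corrected translation

def translate_with_repair(source: str) -> str:
    raw = translate(source)
    return repair(source, raw)

print(translate_with_repair("Der Vertrag tritt am 1. Mai in Kraft."))
```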


Issue #34 – Non-Parametric Domain Adaptation for Neural MT

25 Apr 2019 | Author: Raj Patel, Machine Translation Scientist @ Iconic
In a few of our earlier posts (Issues #9 and #19), we discussed the topic of domain adaptation – the process of developing and adapting machine translation engines for specific industries, content types, and use cases – in the context of Neural MT. In general, domain adaptation methods require retraining of neural models, using in-domain data or infusing domain information at the […]


Issue #32 – The Transformer Model: State-of-the-art Neural MT

04 Apr 2019 | Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic
In this post, we will discuss the Transformer model (Vaswani et al. 2017), which is a state-of-the-art model for Neural MT. The Transformer model was published by the Google Brain and Google Research teams in June 2017 and has been a very popular architecture since then. It uses neither Recurrent Neural Networks (RNNs) nor Convolutional Neural Networks (CNNs). Instead, […]
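The excerpt cuts off before the details, but as a self-contained refresher, here is a small NumPy sketch of the scaled dot-product attention the Transformer is built from, following the formula in Vaswani et al. (2017); multi-head projections, masking, and layer normalisation are omitted, and the sizes are made up.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al. 2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the keys
    return weights @ V                                   # weighted sum of values

# Made-up sizes: 3 query positions, 4 key/value positions, dimension 8.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # -> (3, 8)
```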


Issue #31 – Context-aware Neural MT

28 Mar 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic
In this week’s post, we take a look at ‘context-aware’ machine translation. This particular topic deals with how Neural MT engines can make use of external information to determine what translation to produce – “external information” meaning information other than the words in the sentence being translated. Other modalities, for instance speech, images, and videos, or even other sentences in the source document […]


Issue #30 – Reducing loss of meaning in Neural MT

28 Mar 2019 | Author: Raj Patel, Machine Translation Scientist @ Iconic
An important, and perhaps obvious, feature of high-quality machine translation systems is that they preserve the meaning of the source in the translation. That is to say, if we have two different source sentences with slightly different meanings, we should have slightly different translations. However, this nuance can be a challenge, even for state-of-the-art systems, particularly in cases where source […]


Issue #28 – Hybrid Unsupervised Machine Translation

07 Mar 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic
In Issue #11 of this series, we first looked directly at the topic of unsupervised machine translation – training an engine without any parallel data. Since then, it has gone from a promising concept to one that can produce effective systems that perform close to the level of fully supervised engines (trained with parallel data). The prospect of building good MT engines […]
