Issue #62 – Domain Differential Adaptation for Neural MT

28 Nov 2019 – Author: Raj Patel, Machine Translation Scientist @ Iconic

Neural MT models are data hungry and domain sensitive, and it is nearly impossible to obtain a good amount (>1M segments) of training data for every domain we are interested in. One common strategy is to align the statistics of the source and target domain, but the drawback of this approach is that the statistics of the different domains are inherently […]
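The excerpt is truncated, but the core idea of Domain Differential Adaptation can be sketched: rather than forcing the two domains' statistics to match, shift the NMT model's predictions by the *difference* between an in-domain and an out-of-domain language model. Below is a minimal sketch, assuming the shallow variant in which candidate scores are adjusted by the LM differential at decoding time; the function and parameter names are hypothetical:

```python
import math

def dda_shallow_score(nmt_logprob: float,
                      lm_in_logprob: float,
                      lm_out_logprob: float,
                      lam: float = 0.5) -> float:
    """Shift the NMT log-probability of a candidate token by the weighted
    difference between in-domain and out-of-domain LM log-probabilities.
    `lam` (hypothetical name) controls how strongly the differential acts."""
    return nmt_logprob + lam * (lm_in_logprob - lm_out_logprob)

# Toy example: in a medical domain, the in-domain LM strongly prefers
# "patient", so the differential overrides the generic NMT preference.
candidates = {
    "patient": dda_shallow_score(math.log(0.30), math.log(0.40), math.log(0.05)),
    "client":  dda_shallow_score(math.log(0.35), math.log(0.02), math.log(0.30)),
}
print(max(candidates, key=candidates.get))  # -> patient
```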


Issue #61 – Context-Aware Monolingual Repair for Neural Machine Translation

21 Nov 2019 – Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic

In issue #15 and issue #39 we looked at various approaches for document-level translation. In this blog post, we will look at another approach, proposed by Voita et al. (2019a), to capture context information. This approach is unique in the sense that it utilizes only target monolingual data to improve discourse phenomena (deixis, ellipsis, lexical cohesion, ambiguity, […]
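The approach referenced here is a two-pass scheme: sentences are first translated in isolation, and a monolingual sequence-to-sequence model then repairs inconsistencies across the group of translations. A minimal sketch of that pipeline follows, with hypothetical function names and separator token; in the actual approach the repair model is trained purely on target-language data:

```python
from typing import Callable, List

def translate_then_repair(src_sents: List[str],
                          mt: Callable[[str], str],
                          repair: Callable[[str], str],
                          sep: str = " _eos ") -> List[str]:
    """Two-pass document translation: a context-agnostic first pass per
    sentence, then a target-monolingual repair model that fixes discourse
    inconsistencies (deixis, ellipsis, lexical cohesion) over the group."""
    draft = [mt(s) for s in src_sents]   # sentence-level, no context
    repaired = repair(sep.join(draft))   # second pass over the whole group
    return repaired.split(sep)

# Usage with stand-in models (identity functions, purely for illustration):
print(translate_then_repair(["Hola.", "Gracias."],
                            mt=lambda s: s, repair=lambda g: g))
```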


Issue #60 – Character-based Neural Machine Translation with Transformers

14 Nov 2019 – Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

We saw in issue #12 of this blog how character-based recurrent neural networks (RNNs) could outperform (sub)word-based models if the network is deep enough. However, character sequences are much longer than subword ones, which is not easy to deal with in RNNs. In this post, we discuss how the Transformer architecture changes the situation for character-based models. We take a […]
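To make the length gap concrete: a character-level model sees one token per symbol, while a subword model compresses the same sentence into a handful of units, so character sequences are several times longer, straining an RNN's sequential processing far more than the Transformer's self-attention. A small illustration (the subword segmentation below is hand-picked, not the output of a trained BPE model):

```python
sentence = "character-based neural machine translation"

# Character-level view: one token per symbol, spaces included.
char_tokens = list(sentence)

# Subword view: illustrative BPE-style segmentation ("@@" marks a split).
subword_tokens = ["character", "-", "based", "neural", "machine", "transl@@", "ation"]

print(len(char_tokens), len(subword_tokens))  # 42 vs 7: ~6x longer at char level
```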
