Issue #52 – A Selection from ACL 2019

19 Sep 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

The Conference of the Association for Computational Linguistics (ACL) took place this summer, and over the past few months we have reviewed a number of preprints (see Issues #28, #41 and #43) which were published at ACL. In this post, we take a look at three more papers presented at the conference that we found particularly interesting, in the context of […]

Issue #48 – It’s all French Belgian Fries to me… or The Art of Multilingual e-Disclosure (Part II)

01 Aug 2019 | Author: Jérôme Torres Lozano, Director of Professional Services, Inventus

This is the second of a two-part guest post from Jérôme Torres Lozano, the Director of Professional Services at Inventus, who shares his perspective on The Art of Multilingual e-Disclosure. In Part I, we learned about the challenges of languages in e-disclosure. In this post, he will discuss language identification and the translation options available […]

Issue #47 – It’s all French Belgian Fries to me, or The Art of Multilingual e-Disclosure (Part I)

25 Jul 2019 | Author: Jérôme Torres Lozano, Director of Professional Services, Inventus

Over the next two weeks, we’re taking a slightly different approach on the blog. In today’s article, the first of two parts, we hear from Jérôme Torres Lozano of Inventus, a user of Iconic’s Neural MT solutions for e-discovery. He gives us an entertaining look at his experiences of the challenges of language, […]

Issue #46 – Augmenting Self-attention with Persistent Memory

18 Jul 2019 | Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic

In Issue #32 we introduced the Transformer model as the new state of the art in Neural Machine Translation. Subsequently, in Issue #41, we looked at some approaches aiming to improve upon it. In this post, we take a look at a significant change to the Transformer model, proposed by Sukhbaatar et al. (2019), which further improves its performance. Each Transformer layer consists of two types […]
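
To make the mechanism concrete, here is a minimal single-head sketch of the idea in PyTorch: learned, input-independent “persistent” key/value vectors are concatenated with the context keys and values inside self-attention, playing the role the feed-forward sublayer normally plays. The names and dimensions are illustrative assumptions, not the authors’ code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PersistentMemoryAttention(nn.Module):
    """Single-head self-attention whose keys and values are extended with
    learned, input-independent 'persistent' memory slots."""
    def __init__(self, d_model=512, n_mem=64):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # Persistent memory: trained like ordinary parameters, shared
        # across all inputs.
        self.mem_k = nn.Parameter(torch.randn(n_mem, d_model))
        self.mem_v = nn.Parameter(torch.randn(n_mem, d_model))

    def forward(self, x):  # x: (batch, seq_len, d_model)
        b = x.size(0)
        q = self.q(x)
        # Concatenate the context keys/values with the persistent slots.
        k = torch.cat([self.k(x), self.mem_k.unsqueeze(0).expand(b, -1, -1)], dim=1)
        v = torch.cat([self.v(x), self.mem_v.unsqueeze(0).expand(b, -1, -1)], dim=1)
        scores = q @ k.transpose(1, 2) / (x.size(-1) ** 0.5)
        return F.softmax(scores, dim=-1) @ v

out = PersistentMemoryAttention()(torch.randn(2, 10, 512))  # -> (2, 10, 512)
```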

Issue #41 – Deep Transformer Models for Neural MT

13 Jun 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

The Transformer is a state-of-the-art Neural MT model, as we covered previously in Issue #32. So what happens when something works well with neural networks? We try to go wider and deeper! There are two research directions that look promising for enhancing the Transformer model: building wider networks by increasing the size of the word representation and attention vectors, or building […]
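
For a concrete sense of the two directions, the sketch below contrasts illustrative “wide” and “deep” configurations with a rough parameter count. The numbers are examples for illustration, not taken from the paper.

```python
# Illustrative Transformer configurations (example values, not the paper's).
base = dict(layers=6,  d_model=512,  d_ff=2048)
wide = dict(layers=6,  d_model=1024, d_ff=4096)   # wider: bigger vectors
deep = dict(layers=30, d_model=512,  d_ff=2048)   # deeper: more layers

def parameter_estimate(cfg):
    """Rough per-model count: attention projections plus feed-forward."""
    attn = 4 * cfg["d_model"] ** 2           # Q, K, V and output projections
    ff = 2 * cfg["d_model"] * cfg["d_ff"]    # two feed-forward matrices
    return cfg["layers"] * (attn + ff)

for name, cfg in [("base", base), ("wide", wide), ("deep", deep)]:
    print(f"{name}: ~{parameter_estimate(cfg) / 1e6:.0f}M parameters")
```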

Issue #40 – Consistency by Agreement in Zero-shot Neural MT

06 Jun 2019 | Author: Raj Patel, Machine Translation Scientist @ Iconic

In two of our earlier posts (Issues #6 and #37), we discussed the zero-shot approach to Neural MT – learning to translate from source to target without seeing even a single example of the language pair directly. In Neural MT, zero-shot training is achieved using a multilingual architecture (Johnson et al., 2017) – a single NMT engine that can translate between […]
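
As a toy illustration of what “agreement” can mean here (a stand-in, not the paper’s exact objective), the snippet below scores how much two routes into an auxiliary language disagree: a symmetric KL divergence between the output distributions obtained from the source and from the target of a parallel pair.

```python
import numpy as np

def agreement_loss(p_via_src, p_via_tgt, eps=1e-9):
    """Symmetric KL divergence between two next-token distributions over an
    auxiliary language; zero when the two routes agree perfectly."""
    p = np.asarray(p_via_src) + eps
    q = np.asarray(p_via_tgt) + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Toy distributions over a 4-word Spanish vocabulary: one obtained from the
# English side, one from the French side of the same parallel sentence pair.
print(agreement_loss([0.7, 0.2, 0.05, 0.05], [0.6, 0.3, 0.05, 0.05]))
```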

Issue #39 – Context-aware Neural Machine Translation

30 May 2019 | Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic

Back in Issue #15, we looked at the topic of document-level translation and the idea of looking at more context than just the sentence when machine translating. In this post, we take a more general look at the role of context in machine translation as it relates to specific types of linguistic phenomena and the issues they raise. We review the work […]
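
As a concrete example of feeding the model more than the current sentence, one simple recipe (in the spirit of concatenation approaches to document-level NMT, not necessarily the methods reviewed in the post) prepends the previous source sentence with a separator token; the `<brk>` token below is an illustrative assumption.

```python
BREAK = "<brk>"  # hypothetical separator token, assumed to be in the vocabulary

def add_context(source_sentences):
    """Pair each source sentence with its predecessor so the encoder sees
    one sentence of document context."""
    out = []
    for i, sent in enumerate(source_sentences):
        prev = source_sentences[i - 1] if i > 0 else ""
        out.append(f"{prev} {BREAK} {sent}".strip())
    return out

doc = ["The bank raised rates.", "It cited inflation."]
print(add_context(doc))
# ['<brk> The bank raised rates.',
#  'The bank raised rates. <brk> It cited inflation.']
```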

Issue #37 – Zero-shot Neural MT as Domain Adaptation

16 May 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

Zero-shot machine translation – a topic we first covered in Issue #6 – is the idea that you can have a single MT engine that can translate between multiple languages. Such multilingual Neural MT systems can be built by simply concatenating parallel sentence pairs in several language directions and only adding a token on the source side indicating to which […]
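
That data preparation step is simple enough to sketch in full. The snippet below builds one multilingual training set by tagging each source sentence with a target-language token; the `<2xx>` token format is an illustrative assumption in the style of Johnson et al. (2017).

```python
def tag_corpus(pairs, target_lang):
    """Prefix each source sentence with a token naming the target language."""
    return [(f"<2{target_lang}> {src}", tgt) for src, tgt in pairs]

en_fr = [("Good morning.", "Bonjour.")]
en_de = [("Good morning.", "Guten Morgen.")]

# One training set, one engine, several language directions.
multilingual = tag_corpus(en_fr, "fr") + tag_corpus(en_de, "de")
print(multilingual[0])  # ('<2fr> Good morning.', 'Bonjour.')
```

At inference time, zero-shot translation amounts to swapping the token: prefixing a French sentence with `<2de>` requests a direction the engine never saw during training.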

Issue #36 – Average Attention Network for Neural MT

09 May 2019 | Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic

In Issue #32, we covered the Transformer model, the current state of the art in neural machine translation. In this post we explore a technique presented by Zhang et al. (2018), which modifies the Transformer model and speeds up the translation process by 4-7 times across a range of different engines. Where is the bottleneck? In the […]
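
The speed-up comes from replacing the decoder’s self-attention with a cumulative average over previous positions, which can be updated in constant time per decoding step instead of attending over the whole prefix. Below is a minimal sketch of that running average only; the full average attention network adds gating and a feed-forward transformation on top.

```python
import numpy as np

def average_attention_step(running_sum, step, new_embedding):
    """O(1) update of the average-attention context: the mean of all
    embeddings up to the current decoding position."""
    running_sum = running_sum + new_embedding
    return running_sum, running_sum / step  # updated sum, current context

d_model = 4
running_sum = np.zeros(d_model)
for step, emb in enumerate(np.random.randn(3, d_model), start=1):
    running_sum, context = average_attention_step(running_sum, step, emb)
    print(step, context)
```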

Issue #35 – Text Repair Model for Neural Machine Translation

02 May 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

Neural machine translation engines produce systematic errors which are not always easy to detect and correct in an end-to-end framework with millions of hidden parameters. One potential way to resolve these issues is to do so after the fact – correcting the errors by post-processing the output with an automatic post-editing (APE) step. This week we take a look at […]
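
The shape of the pipeline is easy to sketch: an APE model is a second system that maps the source and the raw MT output to a corrected output. Both functions below are hypothetical stand-ins, with hard-coded repairs in place of the learned text repair model the post discusses.

```python
def translate(source):
    """Stand-in for an NMT engine (hypothetical output with typical errors)."""
    return "The the contract are signed ."

def post_edit(source, mt_output):
    """Stand-in for an APE step: a learned model would condition on both
    the source and the MT output; here two repairs are hard-coded."""
    return mt_output.replace("The the", "The").replace("are signed", "is signed")

src = "Le contrat est signé ."
print(post_edit(src, translate(src)))  # 'The contract is signed .'
```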
