Issue #85 – Applying Terminology Constraints in Neural MT

11 Jun 2020 | Author: Dr. Chao-Hong Liu, Machine Translation Scientist @ Iconic

Maintaining consistency of terminology translation in Neural Machine Translation (NMT) is a more challenging task than in Statistical MT (SMT). In this post, we review a method proposed by Dinu et al. (2019) to train NMT to use custom terminology. Translation with terminology constraints: applying terminology constraints to translation may appear to be an easy task. It is a […]
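
As a rough illustration of the source-side annotation idea discussed in the full post, the sketch below injects target-language terms from a hypothetical term base directly into the source sentence and marks each token with a factor. It is a simplified reading of the approach, not Dinu et al.'s actual implementation; the term base, the factor scheme and the whitespace tokenisation are all assumptions made for the example.

```python
# Minimal sketch (not Dinu et al.'s exact implementation) of source-side
# terminology injection: source terms matched in a custom term base are
# annotated inline with their desired target translation, so an NMT model
# trained on such data can learn to copy the injected term into its output.

TERM_BASE = {"fuel tank": "Kraftstofftank"}  # hypothetical EN->DE terminology entry

def annotate_source(sentence, term_base):
    """Append the target term after each matched source term.

    Factors: 0 = ordinary source token, 1 = source-term token,
    2 = injected target-term token (a simplified "append" variant,
    using plain whitespace tokenisation).
    """
    tokens = sentence.split()
    out_tokens, factors = [], []
    i = 0
    while i < len(tokens):
        matched = False
        # try to match a terminology entry starting at position i
        for src_term, tgt_term in term_base.items():
            src_tokens = src_term.split()
            if tokens[i:i + len(src_tokens)] == src_tokens:
                out_tokens.extend(src_tokens)
                factors.extend([1] * len(src_tokens))
                tgt_tokens = tgt_term.split()
                out_tokens.extend(tgt_tokens)
                factors.extend([2] * len(tgt_tokens))
                i += len(src_tokens)
                matched = True
                break
        if not matched:
            out_tokens.append(tokens[i])
            factors.append(0)
            i += 1
    return out_tokens, factors

print(annotate_source("check the fuel tank before departure", TERM_BASE))
# (['check', 'the', 'fuel', 'tank', 'Kraftstofftank', 'before', 'departure'],
#  [0, 0, 1, 1, 2, 0, 0])
```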


Issue #84 – Are Neural Machine Translation Systems Good Estimators of Quality?

04 Jun 2020 | Author: Prof. Lucia Specia, Professor of Natural Language Processing, Imperial College London (also affiliated with ADAPT/Dublin City University and the University of Sheffield)

This week, we are delighted to have a guest post from Prof. Lucia Specia of Imperial College London, and latterly the University of Sheffield and our own alma mater, Dublin City University. Prof. Specia is one of the world’s preeminent experts on the topic of […]
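
As a toy illustration of the question in the title (and not necessarily how the work discussed in the post measures it), the snippet below treats the probabilities an NMT system assigns to its own output tokens as a simple, model-internal quality signal; all numbers are made up.

```python
import math

def sentence_confidence(token_log_probs):
    """Average token log-probability: a simple model-internal quality signal."""
    return sum(token_log_probs) / len(token_log_probs)

# Made-up per-token probabilities for two hypothetical translations.
confident = [math.log(p) for p in (0.9, 0.8, 0.95, 0.85)]
uncertain = [math.log(p) for p in (0.3, 0.2, 0.6, 0.1)]
print(sentence_confidence(confident) > sentence_confidence(uncertain))  # True
```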


Issue #82 – Constrained Decoding using Levenshtein Transformer

14 May 2020 | Author: Raj Patel, Machine Translation Scientist @ Iconic

In constrained decoding, we force in-domain terminology to appear in the final translation. We have discussed constrained decoding in earlier blog posts (#7, #9, #79). In this blog post, we will discuss a simple and effective algorithm for incorporating lexical constraints in Neural Machine Translation (NMT) proposed by Susanto et al. (2020) and try to understand how it is better than […]
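
The toy sketch below illustrates the general mechanism behind constrained decoding with an insertion-based, Levenshtein-style model: the lexical constraints form the initial hypothesis and are never deleted, so they are guaranteed to survive into the output. The "model" here is a canned lookup table used only to show the control flow; it is not the Levenshtein Transformer or Susanto et al.'s algorithm.

```python
def toy_insert_proposals(left, right):
    """Hypothetical stand-in for an insertion head: given the tokens on either
    side of a gap, propose tokens to insert (empty list = no insertion)."""
    table = {
        ("<s>", "fuel"): ["check", "the"],
        ("tank", "</s>"): ["regularly"],
    }
    return table.get((left, right), [])

def constrained_decode(constraint_tokens, max_iters=3):
    # The hard constraints form the initial canvas; because we only ever
    # insert (never delete), every constraint survives into the final output.
    canvas = ["<s>"] + list(constraint_tokens) + ["</s>"]
    for _ in range(max_iters):
        new_canvas, changed = [], False
        for left, right in zip(canvas, canvas[1:]):
            new_canvas.append(left)
            inserted = toy_insert_proposals(left, right)
            if inserted:
                new_canvas.extend(inserted)
                changed = True
        new_canvas.append(canvas[-1])
        canvas = new_canvas
        if not changed:
            break
    return canvas[1:-1]  # strip the sentence markers

print(constrained_decode(["fuel", "tank"]))
# ['check', 'the', 'fuel', 'tank', 'regularly']
```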


Issue #81 – Evaluating Human-Machine Parity in Language Translation: part 2

07 May 2020 | Author: Dr. Sheila Castilho, Post-Doctoral Researcher @ ADAPT Research Centre

This is the second in a two-part post addressing machine translation quality evaluation – an overarching topic regardless of the underlying algorithms. Following our own summary last week, this week we are delighted to have one of the paper’s authors, Dr. Sheila Castilho, give her take on the paper, their motivations for writing it, and where we […]


Issue #73 – Mixed Multi-Head Self-Attention for Neural MT

12 Mar 2020 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

Self-attention is a key component of the Transformer, a state-of-the-art neural machine translation architecture. In the Transformer, self-attention is divided into multiple heads to allow the system to independently attend to information from different representation subspaces. Recently, it has been shown that some redundancy occurs across the multiple heads. In this post, we take a look at approaches which ensure […]
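
For readers who want to see the mechanics, here is a minimal NumPy sketch of multi-head self-attention, showing how the projected representation is split into per-head subspaces and re-assembled. It omits the output projection, masking and dropout of a real Transformer layer, and the shapes are deliberately simplified.

```python
import numpy as np

def multi_head_self_attention(X, W_q, W_k, W_v, n_heads):
    """Each head attends over its own slice (subspace) of the projections."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    Q, K, V = X @ W_q, X @ W_k, X @ W_v              # (seq_len, d_model) each
    outputs = []
    for h in range(n_heads):
        sl = slice(h * d_head, (h + 1) * d_head)      # this head's subspace
        q, k, v = Q[:, sl], K[:, sl], V[:, sl]
        scores = q @ k.T / np.sqrt(d_head)             # scaled dot-product
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        outputs.append(weights @ v)                     # (seq_len, d_head)
    return np.concatenate(outputs, axis=-1)             # heads re-assembled

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 16))                         # 5 tokens, d_model = 16
W = [rng.standard_normal((16, 16)) for _ in range(3)]
print(multi_head_self_attention(X, *W, n_heads=4).shape)  # (5, 16)
```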


Issue #68 – Incorporating BERT in Neural MT

07 Feb 2020 | Author: Raj Patel, Machine Translation Scientist @ Iconic

BERT (Bidirectional Encoder Representations from Transformers) has shown impressive results in various Natural Language Processing (NLP) tasks. However, how to effectively apply BERT in Neural MT has not been fully explored. In general, BERT is fine-tuned for downstream NLP tasks. For Neural MT, a pre-trained BERT model is used to initialise the encoder in an encoder-decoder architecture. In this post we […]
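
As a minimal sketch of the warm-starting recipe mentioned above, here is one common way to do it with the Hugging Face transformers library; this illustrates the general idea, not necessarily the exact method reviewed in the post, and the checkpoint name is just an example.

```python
# A pre-trained BERT checkpoint initialises the encoder of an encoder-decoder
# model; the decoder (and its cross-attention) must still be trained on
# parallel data before the model can translate.
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-multilingual-cased",   # encoder weights copied from BERT
    "bert-base-multilingual-cased",   # decoder also warm-started from BERT
)
# Generation settings required for the warm-started decoder.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
# From here, the model is fine-tuned on parallel source/target sentence pairs.
```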


Issue #66 – Neural Machine Translation Strategies for Low-Resource Languages

23 Jan 2020

This week we are pleased to welcome the newest member of our scientific team, Dr. Chao-Hong Liu. In this, his first post with us, he’ll give his views on two specific MT strategies, namely pivot MT and zero-shot MT. While we have covered these topics in previous ‘Neural MT Weekly’ blog posts (Issue #54, Issue #40), these are topics that Chao-Hong has worked on recently, prior to joining […]
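
To make the pivot idea concrete before the full post, here is a toy sketch of pivot translation: with no direct source-to-target system available, we chain a source-to-pivot and a pivot-to-target system through a resource-rich pivot language such as English. The translate functions are hypothetical stand-ins, not real models.

```python
def translate_gle_eng(text):
    """Hypothetical Irish->English system (canned lookup for illustration)."""
    return {"Dia duit": "Hello"}.get(text, text)

def translate_eng_tha(text):
    """Hypothetical English->Thai system (canned lookup for illustration)."""
    return {"Hello": "สวัสดี"}.get(text, text)

def pivot_translate(text, src_to_pivot, pivot_to_tgt):
    # Errors compound across the two steps, which is the main drawback of
    # pivoting compared with a direct (e.g. zero-shot multilingual) system.
    return pivot_to_tgt(src_to_pivot(text))

print(pivot_translate("Dia duit", translate_gle_eng, translate_eng_tha))  # สวัสดี
```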


Issue #64 – Neural Machine Translation with Byte-Level Subwords

13 Dec 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

In order to limit vocabulary size, most neural machine translation engines are based on subwords. In some settings, character-based systems are even better (see issue #60). However, rare characters in noisy data or in character-based languages can unnecessarily take up vocabulary slots and limit the vocabulary's compactness. In this post we take a look at an alternative, proposed by Wang et al. (2019), […]
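
As a quick, hedged illustration of the byte-level idea (the details of Wang et al.'s approach are in the full post), the snippet below represents text as UTF-8 byte symbols, so the base vocabulary never exceeds 256 entries no matter how rare the characters are; subword merges would then be learned over these byte symbols (not shown here).

```python
def to_byte_symbols(text):
    """Map text to UTF-8 byte symbols; at most 256 distinct base symbols."""
    return [f"<{b:02x}>" for b in text.encode("utf-8")]

def from_byte_symbols(symbols):
    data = bytes(int(s[1:-1], 16) for s in symbols)
    # Byte-level output can contain partial UTF-8 sequences; 'replace' keeps
    # decoding robust if a generated sequence is not valid UTF-8.
    return data.decode("utf-8", errors="replace")

symbols = to_byte_symbols("汽车")       # two Chinese characters
print(symbols)                           # ['<e6>', '<b1>', '<bd>', '<e8>', '<bd>', '<a6>']
print(from_byte_symbols(symbols))        # 汽车
```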


Issue #62 – Domain Differential Adaptation for Neural MT

28 Nov 2019 | Author: Raj Patel, Machine Translation Scientist @ Iconic

Neural MT models are data-hungry and domain-sensitive, and it is nearly impossible to obtain a good amount (>1M segments) of training data for every domain we are interested in. One common strategy is to align the statistics of the source and target domains, but the drawback of this approach is that the statistics of the different domains are inherently […]
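
As a toy sketch of one way to exploit the difference between domains (made-up numbers, and not necessarily the exact formulation reviewed in the post), the snippet below shifts an NMT system's next-token distribution using the gap between an in-domain and a general-domain language model.

```python
import math

def adapt_distribution(p_nmt, p_lm_in, p_lm_out, weight=0.5):
    """Re-score candidates by adding the in-domain minus general-domain
    LM log-probability difference, then renormalise."""
    scores = {
        tok: math.log(p_nmt[tok])
             + weight * (math.log(p_lm_in[tok]) - math.log(p_lm_out[tok]))
        for tok in p_nmt
    }
    z = sum(math.exp(s) for s in scores.values())
    return {tok: math.exp(s) / z for tok, s in scores.items()}

# Toy candidates for the next target token after "check the ...":
p_nmt    = {"bank": 0.6, "tank": 0.4}    # generic NMT system prefers "bank"
p_lm_in  = {"bank": 0.2, "tank": 0.8}    # automotive in-domain LM
p_lm_out = {"bank": 0.7, "tank": 0.3}    # general-domain LM
print(adapt_distribution(p_nmt, p_lm_in, p_lm_out))
# domain-differential re-weighting now favours "tank"
```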


Issue #60 – Character-based Neural Machine Translation with Transformers

14 Nov 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

We saw in issue #12 of this blog how character-based recurrent neural networks (RNNs) could outperform (sub)word-based models if the network is deep enough. However, character sequences are much longer than subword ones, which is not easy for RNNs to deal with. In this post, we discuss how the Transformer architecture changes the situation for character-based models. We take a […]
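
To see why sequence length is the sticking point, the quick comparison below counts the same sentence as characters versus subword tokens; the subword segmentation is invented for illustration, not produced by a trained BPE model.

```python
sentence = "character-based neural machine translation"
# Hypothetical BPE-style segmentation ("@@" marks a continued word).
subwords = ["char@@", "acter-based", "neural", "machine", "translation"]
chars = list(sentence)
print(len(subwords), len(chars))   # 5 subword tokens vs. 42 characters
```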
