Neural Machine Translation

Highlights from Machine Translation and Multilinguality in October 2022

Here are my monthly highlights from papers on machine translation and multilinguality that appeared on arXiv, many of them preprints for the upcoming EMNLP conference. Folks from Amazon published a preprint that introduces a simple method for making pre-trained multilingual representations more robust to noisy inputs. It is a very straightforward approach: they sample typos based on Wikipedia logs and use them during model training. In addition, they add a contrastive loss that forces the noisy versions of sentences […]
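The augmentation idea in the excerpt can be sketched roughly as follows. This is my own minimal reconstruction, not the paper's code: the typo table is a made-up stand-in for typos mined from Wikipedia logs, and all names are hypothetical.

```python
import random

# Stand-in for a typo table mined from Wikipedia logs (illustrative only).
TYPO_TABLE = {
    "because": ["becuase", "becasue"],
    "received": ["recieved"],
}

def add_typos(sentence: str, p: float = 0.3, rng: random.Random = None) -> str:
    """Replace each word with a sampled typo with probability p."""
    rng = rng or random.Random()
    words = []
    for w in sentence.split():
        if w in TYPO_TABLE and rng.random() < p:
            words.append(rng.choice(TYPO_TABLE[w]))
        else:
            words.append(w)
    return " ".join(words)
```

During training, both the clean and the noised version of a sentence would be fed to the model, with the contrastive loss pulling their representations together.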

Read more

Highlights from Machine Translation and Multilinguality 02/2022

After 100 MT Weekly posts (which took me 130 weeks to write), I realized that weekly blogging is impossible while teaching every week. So I decided to change the format and write monthly summaries of what I found most interesting in machine translation and multilinguality. This is the first issue, summarizing the interesting things that happened in February. Exciting news about WMT: there will be some exciting changes in the WMT competitions. WMT is an annual conference on machine translation […]

Read more

Highlights from Machine Translation and Multilinguality in March 2022

Here is a monthly summary of what I found most interesting on arXiv this month in machine translation and multilinguality. This month was the camera-ready deadline for ACL 2022, so many of the interesting papers were accepted to ACL. Overlapping BPE: when training BPE, the merges do not actually have to follow the simple objective of merging the most frequent token pair. In massively multilingual models, there is an imbalance between languages, and some of them get segmented almost down to […]
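For context, the "simple objective" the excerpt refers to is the standard BPE training step: count all adjacent symbol pairs in the corpus and merge the most frequent one. A minimal sketch of that baseline step (function names are my own):

```python
from collections import Counter

def most_frequent_pair(corpus: list) -> tuple:
    """Find the most frequent adjacent symbol pair across all words."""
    pairs = Counter()
    for word in corpus:
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

def merge_pair(corpus: list, pair: tuple) -> list:
    """Replace every occurrence of the pair with the merged symbol."""
    merged = []
    for word in corpus:
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
                out.append(word[i] + word[i + 1])
                i += 2
            else:
                out.append(word[i])
                i += 1
        merged.append(out)
    return merged
```

In a massively multilingual corpus, pair frequencies are dominated by high-resource languages, which is why low-resource languages can end up segmented nearly character by character.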

Read more

Highlights from Machine Translation and Multilinguality 04/2022

Another month is over, so here is my overview of what I found most interesting in machine translation and multilinguality. Rotation ciphers as regularizers: a paper accepted to ACL 2022 from Simon Fraser University experiments with applying rotation ciphers to the source side of MT data as a data augmentation technique. They tested it in low-data scenarios and it seems to work quite well, which actually seems quite strange to me. It is just systematically replacing characters with different characters – […]

Read more

Highlights from Machine Translation and Multilinguality in May and June 2022

After a while, here is a dump of what I found most interesting on arXiv about machine translation and multilinguality, covering May and June of this year. Google Research published a pre-print of their NAACL paper: SCONES (Single-label Contrastive Objective for Non-Exclusive Sequences). The paper is about a simple trick: they replace the softmax with per-token binary classifiers with sigmoid outputs and use the sum of binary cross-entropies as the loss function. It achieves slightly better BLEU and BLEURT scores […]
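The trick can be sketched as follows. This is my own minimal reconstruction of the loss from the excerpt's description (each vocabulary item gets an independent sigmoid; the gold token is labeled 1 and all others 0), not the paper's code:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def scones_loss(logits: list, gold_index: int) -> float:
    """Sum of per-token binary cross-entropies instead of softmax cross-entropy."""
    loss = 0.0
    for i, z in enumerate(logits):
        p = sigmoid(z)
        label = 1.0 if i == gold_index else 0.0
        loss += -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))
    return loss
```

Unlike softmax cross-entropy, the per-token probabilities here do not have to sum to one, which is what makes the objective "non-exclusive".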

Read more

Highlights from Machine Translation and Multilinguality in July 2022

Here is my monthly summary of what I found worth reading on arXiv in the past month. A preprint from JHU studies zero-shot cross-lingual transfer using pretrained multilingual representations and comes to the conclusion that it is an under-specified optimization problem. In other words, with a multilingual representation model, there are potentially many solutions that are good for the source language, but only some of them are good for the target language. In practice, the solution is probably proper training […]

Read more

Highlights from Machine Translation and Multilinguality in September 2022

Here are my monthly highlights from papers on machine translation and multilinguality. A preprint from the Nara Institute of Science and Technology shows that target-language-specific fully connected layers in the Transformer decoder improve multilingual and zero-shot MT compared to the current practice of using a special token to indicate the target language. A very similar idea also appears in a preprint from Tianjin University, but in this case, they add language-specific parameters for the other part of the Transformer […]

Read more
