Machine Translation Weekly 76: Zero-shot MT with pre-trained encoder

Using pre-trained multilingual representations as a universal encoder for machine translation might seem like an obvious thing to try: train a decoder for one target language using one or several source languages, and you get translation from 100 languages into the target language. This sounds great, but this is not how it works. (Or it works somehow, but not really well; I tried it myself.) Recently, I came across a pre-print where the authors figured out how to do […]
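To make the setup concrete, here is a minimal sketch of what "pre-trained multilingual encoder + single-language decoder" can look like. It assumes XLM-R as the frozen encoder and a freshly initialized PyTorch decoder; all of this is illustrative and not the pre-print's actual code.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# A frozen multilingual encoder (here XLM-R, an assumption; the pre-print may
# use a different model) paired with a freshly initialized Transformer decoder
# that is trained for a single target language.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")
for p in encoder.parameters():
    p.requires_grad = False  # keep the multilingual representations fixed

d_model = encoder.config.hidden_size
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=6)
tgt_embed = nn.Embedding(tokenizer.vocab_size, d_model)
out_proj = nn.Linear(d_model, tokenizer.vocab_size)

def decode_logits(src_sentences, tgt_ids):
    batch = tokenizer(src_sentences, return_tensors="pt", padding=True)
    memory = encoder(**batch).last_hidden_state   # source states, any language
    t = tgt_ids.size(1)
    causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
    states = decoder(tgt_embed(tgt_ids), memory, tgt_mask=causal)
    return out_proj(states)  # logits over the target vocabulary
```

Because only the decoder is trained, the hope is that the encoder's language-neutral representations let the decoder generalize to source languages it never saw during training.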

Read more

Machine Translation Weekly 75: Outbound Translation

This week, I will comment on a paper by my good old friends from Charles University, written in collaboration with the University of Edinburgh, the University of Sheffield, and the University of Tartu within the Bergamot project. The main goal of the project is to develop high-quality machine translation that runs locally in an internet browser and, unlike services such as Google Translate or Microsoft Translator, does not send any (potentially sensitive) data to any server. This is a very […]

Read more

Machine Translation Weekly 74: Architectures we will hear about in MT

This week, I would like to feature three recent papers with innovations in neural architectures that I think might become important in MT and multilingual NLP during the next year. But of course, I might be wrong: in MT Weekly 27, I self-assuredly claimed that the Reformer architecture would start an era of much larger models than we have now and would turn the attention of the community towards document-level problems, and it seems this is not happening. CANINE: Tokenization-free […]

Read more

Machine Translation Weekly 73: Non-autoregressive MT with Latent Codes

Today, I will comment on a paper on non-autoregressive machine translation that shows a neat trick for increasing output fluency. The paper, titled Non-Autoregressive Translation by Learning Target Categorical Codes, has authors from several Chinese private and public institutions and will appear at this year’s NAACL conference. Unlike standard, so-called autoregressive encoder-decoder architectures that decode the output sequentially (and in theory in linear time), non-autoregressive models generate all outputs in parallel (and in theory in constant time, regardless […]
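The contrast between the two decoding regimes is easy to show in code. This is a toy sketch, not the paper's model; `model` stands for any decoder that maps decoder inputs and source states to per-position logits.

```python
import torch

def autoregressive_decode(model, memory, bos_id, max_len):
    """Sequential decoding: each step conditions on the tokens so far."""
    ys = torch.full((memory.size(0), 1), bos_id, dtype=torch.long)
    for _ in range(max_len):                   # max_len sequential model calls
        logits = model(ys, memory)             # (batch, cur_len, vocab)
        next_tok = logits[:, -1].argmax(-1, keepdim=True)
        ys = torch.cat([ys, next_tok], dim=1)
    return ys

def non_autoregressive_decode(model, memory, length):
    """Parallel decoding: all positions are predicted in one model call."""
    # Decoder inputs are position placeholders, not previous outputs, so the
    # whole sequence comes out at once -- constant depth in sequence length.
    placeholders = torch.zeros(memory.size(0), length, dtype=torch.long)
    logits = model(placeholders, memory)       # a single call for all positions
    return logits.argmax(-1)
```

The price of the parallel variant is that output tokens are predicted independently of each other, which is exactly the fluency problem the paper's latent categorical codes try to mitigate.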

Read more

Machine Translation Weekly 72: Self-Training for Zero-Shot MT

This week, I will have a look at a pre-print that describes an unconventional setup for zero-shot machine translation. The pre-print, titled Self-Learning for Zero-Shot Neural Machine Translation, was written by authors from the University of Trento. First of all, I have some doubts about this being really an instance of zero-shot learning (but that is just nitpicking; the paper is interesting regardless of the terminology). In machine learning, zero-shot learning means that a model trained […]
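For readers unfamiliar with self-training, the generic loop looks roughly like this. This is a schematic sketch only, assuming hypothetical `translate()` and `train()` helpers; the pre-print's actual recipe may differ in important details.

```python
def self_training(model, parallel_data, monolingual_src, rounds=3):
    """Generic self-training: the model labels unpaired data for itself."""
    for _ in range(rounds):
        # Translate the unpaired source sentences with the current model.
        synthetic = [(src, translate(model, src)) for src in monolingual_src]
        # Retrain on the real pairs plus the model's own synthetic outputs.
        model = train(model, parallel_data + synthetic)
    return model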

Read more

Machine Translation Weekly 71: Explaining Random Feature Attention

Transformers are the neural architecture that underlies most of the current state of the art in machine translation and in natural language processing in general. One of their major drawbacks is the quadratic complexity of the underlying self-attention mechanism, which in practice limits the sequence length that can be processed by Transformers. There already exist some tricks to deal with that. One of them is locality-sensitive hashing, which was used in the Reformer architecture (see MT Weekly 27). The main idea was computing the […]
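A small sketch of where the quadratic cost comes from and how random features avoid it, in the spirit of the Random Feature Attention paper (simplified here, not the authors' code): with L2-normalized queries and keys, exp(q·k) reduces to a Gaussian kernel, which random sin/cos features approximate, so the attention computation factorizes and never materializes the n-by-n score matrix.

```python
import math
import torch
import torch.nn.functional as F

def softmax_attention(q, k, v):
    """Standard attention: the n-by-n score matrix makes it O(n^2) in length."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    return torch.softmax(scores, dim=-1) @ v

def random_feature_attention(q, k, v, num_features=256):
    """Linear-time approximation via random Fourier features (simplified)."""
    w = torch.randn(num_features, q.size(-1))  # projections shared by q and k

    def phi(x):
        proj = F.normalize(x, dim=-1) @ w.t()
        return torch.cat([proj.sin(), proj.cos()], dim=-1) / math.sqrt(num_features)

    q_f, k_f = phi(q), phi(k)                   # (n, 2 * num_features)
    kv = k_f.transpose(-2, -1) @ v              # (2F, d_v): summed once, O(n)
    norm = q_f @ k_f.sum(dim=-2).unsqueeze(-1)  # attention normalizer, (n, 1)
    return (q_f @ kv) / norm.clamp(min=1e-6)
```

Reassociating (φ(q)φ(k)ᵀ)v into φ(q)(φ(k)ᵀv) is the whole trick: the cost becomes linear in sequence length at the price of an approximation error controlled by the number of random features.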

Read more

Machine Translation Weekly 70: Loss Masking instead of Data Filtering

This week, I will have a closer look at a recent pre-print introducing an alternative to parallel data filtering for machine translation training. The pre-print, titled Gradient-guided Loss Masking for Neural Machine Translation, comes from CMU and Google. Training data cleanliness is a surprisingly important factor for machine translation quality. A large part of the data that we use for training comes from crawling the Internet, so there is no quality guarantee. On the other hand, […]
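The core idea of loss masking, as opposed to filtering, is easy to sketch: noisy pairs stay in the batch, but their contribution to the loss is zeroed out. The criterion below is a deliberately simplified stand-in; the paper derives the mask from how well each example's gradient agrees with the gradient on a small trusted set.

```python
import torch
import torch.nn.functional as F

def masked_loss(logits, targets, keep_mask):
    """Zero out the loss of (presumably noisy) examples instead of
    removing them from the corpus ahead of time."""
    # Per-token cross-entropy with no reduction keeps examples separate.
    per_tok = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    per_sent = per_tok.mean(dim=1)
    return (per_sent * keep_mask).sum() / keep_mask.sum().clamp(min=1)

def gradient_guided_mask(per_sent_grads, clean_grad):
    """Illustrative criterion (simplified from the paper): keep an example
    when its gradient points in the same direction as the gradient computed
    on a small clean data set."""
    return (per_sent_grads @ clean_grad > 0).float()
```

Compared to hard filtering, the mask is recomputed as training progresses, so an example discarded early can still contribute later.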

Read more

Machine Translation Weekly 69: One-Shot learning in MT

This week, I will discuss a paper about the one-shot vocabulary learning abilities of machine translation models. The paper, titled Continuous Learning in Neural Machine Translation using Bilingual Dictionaries, will be presented at EACL in May this year. A very similar idea is also presented in the paper Facilitating Terminology Translation with Target Lemma Annotations, which will be presented at the same conference. One-shot learning is the ability to learn from a single example. In the context […]
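The annotation idea behind the second paper can be sketched as simple data augmentation: the dictionary translation of a known term is injected next to the source token, so the model can learn to copy it into the output. The tag names and the toy dictionary below are my own illustration, not the paper's exact scheme.

```python
def annotate_source(src_tokens, bilingual_dict, sep="<trans>", end="</trans>"):
    """Inline annotation in the spirit of target-lemma annotation:
    append the dictionary translation right after the source term."""
    out = []
    for tok in src_tokens:
        out.append(tok)
        if tok in bilingual_dict:
            out.extend([sep, bilingual_dict[tok], end])
    return out

# annotate_source(["the", "cat"], {"cat": "Katze"})
# -> ["the", "cat", "<trans>", "Katze", "</trans>"]
```

At test time, the same mechanism lets the model pick up a term it has seen in the dictionary only once, which is where the one-shot framing comes from.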

Read more

Machine Translation Weekly 68: Pre-editing of MT inputs

Today, I am going to comment on a paper that systematically explores something that many MT users probably do: pre-editing (editing the source sentence) to get better output from an MT system that is treated as a black box. The paper, titled Understanding Pre-Editing for Black-Box Neural Machine Translation, is by authors from Nagoya University and NICT in Japan and will appear at this year’s EACL. Pre-editing is something I often do when I use automatic […]

Read more

Machine Translation Weekly 67: Where does the language neutrality of mBERT reside?

If someone had told me ten years ago, when I was a freshly graduated bachelor of computer science, that there would be models producing multilingual sentence representations that allow zero-shot model transfer, I would have hardly believed such a prediction. If they added that the models would be total black boxes and we would not know why they worked, I would think they were insane. After all, one of the goals of the mathematization of stuff in science is to make […]

Read more