Issue #116 – Fully Non-autoregressive Neural Machine Translation

04 Feb 2021 | Author: Dr. Patrik Lambert, Senior Machine Translation Scientist @ Iconic

Introduction: The standard Transformer model is autoregressive (AT): the prediction of each target word is conditioned on the previously generated words. The output is produced from left to right, a process which cannot be parallelised because the prediction probability of a token depends on the previous tokens. In the last few years, new approaches have been […]
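To make the contrast concrete, here is a minimal sketch of greedy AT decoding, which must loop token by token, next to non-autoregressive (NAR) decoding, which predicts all positions in one parallel pass. This is not the code behind the systems discussed in the post; the model interfaces are hypothetical assumptions.

```python
import torch

def autoregressive_decode(model, src, bos_id, eos_id, max_len=50):
    """Greedy left-to-right decoding: each step conditions on all
    previously generated tokens, so the loop cannot be parallelised.
    `model(src, tgt)` is an assumed interface returning logits of
    shape [batch, tgt_len, vocab]."""
    tgt = [bos_id]
    for _ in range(max_len):
        logits = model(src, torch.tensor([tgt]))  # p(y_t | y_<t, x)
        next_id = logits[0, -1].argmax().item()
        tgt.append(next_id)
        if next_id == eos_id:
            break
    return tgt

def non_autoregressive_decode(model, src, tgt_len):
    """All target positions are predicted in a single forward pass,
    assuming conditional independence between output tokens.
    `model(src, tgt_len)` is an assumed interface returning logits
    of shape [batch, tgt_len, vocab]."""
    logits = model(src, tgt_len)  # p(y_t | x) for every position t
    return logits.argmax(dim=-1)[0].tolist()
```

The AT loop runs one forward pass per output token, while the NAR version needs a single pass over all positions, which is where the decoding speed-up comes from.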


Issue #105 – Improving Non-autoregressive Neural Machine Translation with Monolingual Data

30 Oct 2020 | Author: Dr. Chao-Hong Liu, Machine Translation Scientist @ Iconic

Introduction: In the training of neural machine translation (NMT) systems, determining how to take advantage of monolingual data to improve the performance of the resulting trained models is a challenge. In this post, we review an approach proposed by Zhou and Keung (2020) under the framework of non-autoregressive (NAR) NMT. The results confirm that NAR models achieve better […]
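As background, NAR models are typically trained on data distilled by an autoregressive teacher, and monolingual source text can be folded into the same pipeline. The sketch below illustrates that general recipe, in the spirit of the approach reviewed here; the function and object names are hypothetical placeholders, not code from the paper:

```python
def build_nar_training_data(parallel_pairs, mono_src_sents, at_teacher):
    """Combine sequence-level distilled parallel data with
    teacher-translated monolingual sentences for NAR training.
    `at_teacher.translate` is an assumed interface to a trained
    autoregressive teacher model."""
    # Distillation: replace references with the teacher's translations
    distilled = [(src, at_teacher.translate(src)) for src, _ in parallel_pairs]
    # Augmentation: translate extra source-only (monolingual) sentences
    synthetic = [(src, at_teacher.translate(src)) for src in mono_src_sents]
    return distilled + synthetic
```

Because the teacher's outputs are more regular than human references, the NAR student sees a less multimodal target distribution, and monolingual data simply enlarges that distilled training set.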
