Issue #115 – Revisiting Low-Resource Neural Machine Translation: A Case Study

28 Jan 2021

Author: Akshai Ramesh, Machine Translation Scientist @ Iconic

Introduction

Although deep neural models produce state-of-the-art results in many translation tasks, they have been found to underperform phrase-based statistical machine translation in resource-poor conditions. The majority of research on low-resource neural machine translation (NMT) focuses on exploiting monolingual data, or parallel data involving other language pairs. Notably less attention has been paid to low-resource NMT that does not rely on such auxiliary data.

In today’s blog post, we will look at the work of Sennrich and Zhang (2019) from the University of Edinburgh. This paper investigates best practices for low-resource recurrent NMT models and shows that, with more efficient use of the available training data and carefully tuned hyperparameters, NMT can outperform phrase-based statistical MT even in low-resource settings.
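One of the paper’s concrete best practices is shrinking the byte-pair-encoding (BPE) subword vocabulary when training data is scarce. As a minimal sketch of that idea (the 2,000-merge count reflects the paper’s reported ultra-low-resource setting, and the corpus path is a hypothetical placeholder, not something taken from this post), a small BPE model could be learned and applied with the subword-nmt toolkit:

# Minimal sketch. Assumes the subword-nmt package is installed
# (pip install subword-nmt) and that "train.de-en.txt" is a
# hypothetical plain-text training corpus, one sentence per line.
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

# Learn only 2,000 merge operations: Sennrich and Zhang (2019) report
# that far smaller BPE vocabularies than the usual 30k+ defaults work
# better when little parallel data is available.
with open("train.de-en.txt", encoding="utf-8") as corpus, \
        open("bpe.codes", "w", encoding="utf-8") as codes_out:
    learn_bpe(corpus, codes_out, num_symbols=2000)

# Apply the learned segmentation to new text.
with open("bpe.codes", encoding="utf-8") as codes_in:
    bpe = BPE(codes_in)

print(bpe.process_line("low-resource translation is sensitive to vocabulary size"))

The broader point is that defaults like a 30k+ subword vocabulary were tuned for high-resource settings and need to be revisited when only a small corpus is available.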
