Issue #81 – Evaluating Human-Machine Parity in Language Translation: part 2

07 May 2020


Author: Dr. Sheila Castilho, Post-Doctoral Researcher @ ADAPT Research Centre

This is the second in a two-part post addressing machine translation quality evaluation – an overarching topic regardless of the underlying algorithms. Following our own summary last week, this week we are delighted to have one of the paper’s authors, Dr. Sheila Castilho, give her take on the paper, the authors’ motivations for writing it, and where we go from here.

In the machine translation (MT) field, there is always great excitement and anticipation around each new wave of MT. In recent years, we have seen impressive claims from several MT providers: Google (2016) announced it was “bridging the gap between human and machine translation [quality]”; Microsoft (2018) reported it had “achieved human parity” on news translation from Chinese to English; and SDL (2018) claimed to have “cracked” Russian-to-English NMT with “near perfect” translation quality. The truth is that, not infrequently, there is a great discrepancy between the high expectations of what MT should accomplish and what it is actually able to deliver.
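To make concrete what such parity claims typically rest on: MT output and human translations of the same source sentences are rated by human judges, and “parity” is declared when the difference between the two sets of ratings is not statistically significant. The sketch below is only illustrative – the ratings are invented and a simple paired t-test stands in for whatever protocol a given study actually used (it assumes SciPy is installed).

```python
# Illustrative sketch of how a "human parity" claim is typically quantified:
# collect human adequacy ratings for MT output and for human translations of
# the same source sentences, then test whether the difference is significant.
# The ratings below are made-up numbers, not data from any study.

from statistics import mean
from scipy import stats  # assumes SciPy is available

# Hypothetical 0-100 adequacy ratings for the same 10 source sentences.
mt_ratings    = [78, 85, 90, 72, 88, 81, 95, 69, 84, 77]
human_ratings = [82, 87, 91, 80, 86, 85, 94, 75, 88, 83]

# Paired test, since both outputs are rated on the same source sentences.
t_stat, p_value = stats.ttest_rel(mt_ratings, human_ratings)

print(f"MT mean: {mean(mt_ratings):.1f}, human mean: {mean(human_ratings):.1f}")
print(f"paired t-test: t={t_stat:.2f}, p={p_value:.3f}")

# Under this framing, p >= 0.05 is read as "no significant difference",
# i.e. parity. Note that failing to find a difference is not the same
# as demonstrating equivalence.
```

This framing is part of why the design of the evaluation – who the raters are, whether they see sentences in context, how good the reference translations are – matters so much: a weak evaluation can fail to detect a difference that is really there.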

Given the hype around NMT and the big claims that came with it, two independent studies…
