Issue #109 – COMET, the Crosslingual Optimised Metric for Evaluation of Translation

26 Nov 2020

Author: Dr. Karin Sim, Machine Translation Scientist @ Iconic

Introduction

In today’s blog post we take a look at COMET, one of the frontrunners at this year’s annual WMT metrics competition when looking across all language pairs (Mathur et al., 2020).

Historically, Machine Translation (MT) quality has been evaluated by comparing the MT output against a human-translated reference, using metrics that are increasingly seen as outdated (Rei et al., 2020) (see also our recent blog posts #106, #104, and #99 on evaluation).
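To make this reference-based setup concrete, here is a minimal sketch of scoring MT output against human references with BLEU via the sacrebleu package. The sentences, variable names and resulting score are invented purely for illustration and do not come from the post.

import sacrebleu

# Hypothetical MT system outputs and their human reference translations
# (made up here for illustration only).
mt_outputs = ["The cat sat on the mat.", "He plays piano very good."]
references = ["The cat is sitting on the mat.", "He plays the piano very well."]

# Corpus-level BLEU: sacrebleu takes the list of hypotheses and a list of
# reference streams (one stream per set of references).
bleu = sacrebleu.corpus_bleu(mt_outputs, [references])
print(f"BLEU = {bleu.score:.2f}")

Metrics of this kind compare surface overlap between the MT output and the reference, which is precisely the limitation that learned metrics such as COMET aim to address.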

Rei et al. (2020) point out that the two major challenges highlighted last year at WMT19 were a failure to accurately correlate
To finish reading, please visit source site