Machine Translation Weekly 77: Reference-free Evaluation

This week, I will comment on a paper by authors from the University of
Maryland and Google Research on reference-free evaluation of machine
translation, which seems quite disturbing to me and suggests there is a lot
about current MT models we still don’t quite understand. The title of the paper
is “Assessing Reference-Free Peer Evaluation for Machine
Translation”
and it will be published at
this year’s NAACL conference.

The standard evaluation of machine translation uses reference translations:
translations that were produced by humans and that we believe are of high
quality (although there could be a very long discussion about what high quality
in this context means). Machine translation systems are evaluated by measuring
the similarity of their outputs to these high-quality reference translations.
The adequacy of
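The reference-based setup described above can be sketched as a simple token-overlap similarity. This is only a toy stand-in for actual MT metrics such as BLEU or chrF (which use n-grams, brevity penalties, and other refinements), but it illustrates the core idea of scoring a system output against a human reference:

```python
from collections import Counter

def unigram_precision(hypothesis: str, reference: str) -> float:
    """Toy reference-based score: the fraction of hypothesis tokens
    that also appear in the reference (with clipped counts).
    Real metrics like BLEU generalize this to n-grams."""
    hyp_tokens = hypothesis.lower().split()
    ref_counts = Counter(reference.lower().split())
    if not hyp_tokens:
        return 0.0
    matched = 0
    for tok in hyp_tokens:
        if ref_counts[tok] > 0:
            ref_counts[tok] -= 1  # clip: each reference token matches once
            matched += 1
    return matched / len(hyp_tokens)

# 5 of the 6 hypothesis tokens occur in the reference
print(unigram_precision("the cat sat on the mat",
                        "the cat is on the mat"))
```

Reference-free evaluation, the subject of the paper, drops the human reference entirely and scores the hypothesis directly against the source sentence using a model.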
