Machine Translation Weekly 95: Minimum Bayes Risk Decoding – the Cooler the Metric, the Cooler it gets

This week I am returning to a topic that I follow with fascination (cf. MT
Weekly #20, #61, #63, and #66) without actually doing any research myself –
decoding in machine translation models. The preprint I will discuss today
comes from Google Research and is titled Minimum Bayes Risk Decoding with
Neural Metrics of Translation Quality. It shows that Minimum Bayes Risk (MBR)
decoding can outperform beam search when done properly, and that there might
be some serious problems in how encoder-decoder-based MT is formalized.

In neural machine translation, we think (and tell students and each other)
that we model the probability of the target sentence given the source
sentence, factorized over target words. With such a model, it makes total
sense to search for the translation with the highest probability, which is
what beam search approximately does.
