Issue #124 – Towards Enhancing Faithfulness for Neural MT

01 Apr 2021

Author: Dr. Karin Sim, Machine Translation Scientist @ Iconic

Introduction

While Neural Machine Translation (NMT) is generally fluent, it can occasionally be deceptively so, omitting fragments of the source or adding content that was never there. In today’s post we examine a method proposed to address this shortcoming and make the model more faithful to the source: Weng et al. (2020) propose a faithfulness-enhanced NMT model called FENMT.

The Problem

They surmise that there are three possible causes of this faithfulness problem in the encoder-decoder framework:

  1. Some parts of the input are hard to encode and are therefore not translated correctly.
  2. The decoder cannot retrieve the correct contextual representation from the encoder (see the sketch after this list).
  3. In aiming for fluency, the language model encourages common words over less frequent but more faithful ones.

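To make the second cause concrete, the sketch below shows the cross-attention step in which the decoder retrieves a contextual representation from the encoder. It is a minimal NumPy illustration of the general mechanism, not code from the paper: when the attention weights place their mass on the wrong source positions, the context vector handed to the decoder no longer reflects the fragment currently being translated, and content can be dropped or invented.

```python
import numpy as np

def cross_attention(query, encoder_states):
    """Scaled dot-product attention: one decoder query over all encoder states.

    query:          (d,)         current decoder hidden state
    encoder_states: (src_len, d) one vector per source token
    Returns the attention weights and the resulting context vector.
    """
    d = query.shape[-1]
    scores = encoder_states @ query / np.sqrt(d)  # similarity per source token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over source positions
    context = weights @ encoder_states            # weighted mix of source states
    return weights, context

# Toy example: three source tokens with 4-dimensional states.
rng = np.random.default_rng(0)
enc = rng.standard_normal((3, 4))
query = enc[1] + 0.1 * rng.standard_normal(4)  # query resembling source token 2
weights, context = cross_attention(query, enc)
print(weights)  # ideally peaked on position 1; diffuse weights dilute the context
```

If the weights are diffuse or peak on the wrong position, the returned context either dilutes or misses the relevant source information, which is precisely the retrieval failure described in cause 2.
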
They then propose a novel training strategy to address these causes.
