Issue #34 – Non-Parametric Domain Adaptation for Neural MT

25 April 2019


Author: Raj Patel, Machine Translation Scientist @ Iconic

In a few of our earlier posts (Issues #9 and #19) we discussed the topic of domain adaptation – the process of developing and adapting machine translation engines for specific industries, content types, and use cases – in the context of Neural MT. In general, domain adaptation methods require retraining neural models using in-domain data, or infusing domain information at the sentence level. In this post, we'll discuss recent developments in updating models as they translate: so-called on-the-fly adaptation.
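To make "infusing domain information at the sentence level" concrete, here is a minimal sketch of one common technique: prepending a domain pseudo-token to each source sentence before training or translation. The token names (e.g. `<legal>`) and the helper function are illustrative assumptions, not from the post.

```python
# Hedged sketch: sentence-level domain infusion via a prepended
# pseudo-token. The domain vocabulary here is assumed for illustration.

def add_domain_token(source_sentence: str, domain: str) -> str:
    """Prepend a domain pseudo-token so the model can condition on it."""
    return f"<{domain}> {source_sentence}"

tagged = add_domain_token("The parties agree to the terms.", "legal")
print(tagged)  # -> "<legal> The parties agree to the terms."
```

At training time every sentence in the corpus is tagged with its domain; at translation time the user (or a classifier) supplies the tag, steering the model toward the target domain without a separate engine per domain.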

Neural MT and on-the-fly adaptation

Approaches to MT can be categorised as parametric (Neural MT) and non-parametric (Statistical MT). Though they generally produce better output, parametric models are known to be occasionally forgetful, i.e. they may fail to use information seen in the training data (caused by parameter shift during training). Non-parametric models, for all their faults, do not suffer from this forgetfulness. Thus, if there were a way to combine the two, we could have the best of both worlds: generalisation ability and robustness. There have been attempts to combine non-parametric methods with Neural MT.
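The non-parametric side of such a combination can be pictured as an explicit memory of training examples that is searched at translation time, so information from the training data is stored verbatim rather than compressed into weights. The sketch below is an assumption-laden toy: bag-of-words cosine similarity stands in for the neural encoder embeddings a real system would use, and the `TranslationMemory` class is hypothetical.

```python
# Hedged sketch of a non-parametric component: a translation memory
# queried by similarity. Bag-of-words vectors are a stand-in for the
# learned sentence embeddings a real on-the-fly adaptation system
# would retrieve with.
from collections import Counter
from math import sqrt


def bow(sentence: str) -> Counter:
    """Toy sentence representation: a bag-of-words count vector."""
    return Counter(sentence.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[t] * b[t] for t in a)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


class TranslationMemory:
    """Non-parametric store: keeps (source, target) pairs verbatim,
    so nothing seen in training can be 'forgotten'."""

    def __init__(self):
        self.entries = []  # list of (vector, source, target)

    def add(self, src: str, tgt: str) -> None:
        self.entries.append((bow(src), src, tgt))

    def nearest(self, query: str):
        """Return the stored (source, target) pair most similar to the query."""
        q = bow(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]))
        return best[1], best[2]


tm = TranslationMemory()
tm.add("the cat sat", "le chat s'est assis")
tm.add("good morning", "bonjour")
src, tgt = tm.nearest("the cat sat down")
print(src, "->", tgt)  # -> "the cat sat -> le chat s'est assis"
```

In a combined system, the retrieved pair would then bias the neural decoder (e.g. through interpolation with its output distribution), giving the parametric model's fluency alongside the memory's robustness.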
