Issue #20 – Dynamic Vocabulary in Neural MT

06 Dec 2018


As has been covered a number of times in this series, Neural MT requires good data for training, and acquiring such data for new languages can be costly and not always feasible. One approach in the Neural MT literature for improving translation quality for low-resource languages is transfer learning. A common practice is to reuse the model parameters (encoder, decoder, and word embeddings) of a high-resource language pair and fine-tune them for a specific domain or language. In this post, we take a look at the concept of a dynamic vocabulary, which improves transfer learning in Neural MT.

Transfer Learning in NMT

Transfer learning uses knowledge from a previously learned task to improve performance on a related task, typically reducing both the amount of training data required and the training time. In Neural MT, research has shown promising results when transfer learning is applied to leverage existing models and cope with the scarcity of training data in specific domains or language settings. More broadly, pre-trained models have been successfully exploited and reported to improve translation quality considerably.
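As an illustration of this practice, here is a minimal sketch of fine-tuning a pre-trained model, assuming a toy PyTorch encoder-decoder and a hypothetical parent checkpoint "parent_nmt.pt"; it is not the setup of any specific paper.

import torch
import torch.nn as nn

class TinyNMT(nn.Module):
    """Toy encoder-decoder used only to illustrate parameter reuse."""
    def __init__(self, vocab_size=1000, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.output = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt):
        _, h = self.encoder(self.embedding(src))            # encode source sentence
        dec_out, _ = self.decoder(self.embedding(tgt), h)   # decode conditioned on source
        return self.output(dec_out)

model = TinyNMT()

# Reuse the parent model's parameters (encoder, decoder, embeddings).
# parent_state = torch.load("parent_nmt.pt")   # hypothetical checkpoint path
# model.load_state_dict(parent_state)

# Fine-tune on the low-resource / in-domain data with a small learning rate,
# so the child model adapts without losing the parent's knowledge.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

src = torch.randint(0, 1000, (8, 12))   # dummy child-language source batch
tgt = torch.randint(0, 1000, (8, 12))   # dummy child-language target batch
logits = model(src, tgt)
loss = criterion(logits.view(-1, logits.size(-1)), tgt.view(-1))
loss.backward()
optimizer.step()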

Zoph et al. (2016) used a parent-child framework: a parent model is first trained on a high-resource language pair, and its parameters are then used to initialise a child model, which is fine-tuned on the low-resource pair.
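In this parent-child setting, the dynamic vocabulary idea can be pictured roughly as follows: when the child vocabulary is built, entries shared with the parent keep their trained embeddings, while entries the parent never saw are initialised afresh. The sketch below illustrates that under those assumptions; the function name and the example vocabularies are hypothetical.

import torch

def transfer_embeddings(parent_emb, parent_vocab, child_vocab, emb_dim):
    # Fresh random initialisation for tokens that are new to the child model.
    child_emb = torch.randn(len(child_vocab), emb_dim) * 0.01
    for token, child_idx in child_vocab.items():
        if token in parent_vocab:                               # token seen by the parent
            child_emb[child_idx] = parent_emb[parent_vocab[token]]
    return child_emb

# Hypothetical vocabularies: the child shares "the"/"house" with the parent
# and introduces "casa", which was never seen during parent training.
parent_vocab = {"the": 0, "house": 1, "dog": 2}
child_vocab = {"the": 0, "house": 1, "casa": 2}
parent_emb = torch.randn(len(parent_vocab), 64)

child_emb = transfer_embeddings(parent_emb, parent_vocab, child_vocab, emb_dim=64)
print(child_emb.shape)  # torch.Size([3, 64])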
