Highlights from Machine Translation and Multilinguality in November 2022

Here are my monthly highlights from papers about machine translation and
multilinguality that appeared on arXiv in November 2022.

A preprint with 19 authors from 13 institutions presents something like the T0
model: but instead of starting with the (more or less) monolingual T5 model,
they use the multilingual BLOOM and mT5 models and call the resulting models
BLOOMZ and mT0. The main idea is to finetune the underlying model (or the
foundation model?) on as many tasks as possible, so that the model learns it
will be used to solve all sorts of different tasks rather than just do language
modeling. This seems to work well for most tasks. Machine-translating the
training data used in this multitask finetuning stage also worked well.
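
To get a feel for what such an instruction-finetuned model does, here is a
minimal sketch of zero-shot prompting with one of the publicly released mT0
checkpoints via Hugging Face Transformers. The `bigscience/mt0-small`
checkpoint is just the smallest public one; the preprint itself does not
prescribe this particular usage.

```python
# Minimal sketch: zero-shot prompting an instruction-finetuned model,
# assuming the Hugging Face `transformers` library and the public
# `bigscience/mt0-small` checkpoint (mT0 is mT5-based, hence seq2seq).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "bigscience/mt0-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Because the model was finetuned on many prompted tasks, a plain
# natural-language instruction is enough; no task-specific head is needed.
prompt = "Translate to English: Je t'aime."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```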

A preprint with authors from three Beijing institutions shows interesting
work on improving the language …