Machine Translation Weekly 93: Notes from EMNLP 2021

Another big NLP conference is over and here are my notes on the papers that I
liked the most. My general impression was sort of similar to what I got from
ACL this year. It seems to me that the field is progressing towards some
behavioral understanding of what neural models do, which allows cool tricks
that were hardly possible to think of only a few years ago. Excellent examples
are the tricks with adapters (sketched below) or the non-parametric methods
for language modeling and MT.
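
As a concrete illustration of the adapter trick mentioned above: an adapter is
a small trainable bottleneck layer with a residual connection, inserted into an
otherwise frozen pre-trained model. Below is a minimal PyTorch sketch in the
style of Houlsby et al. (2019); the class name, dimensions, and placement are
illustrative assumptions, not taken from any particular EMNLP paper.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """A bottleneck feed-forward block with a residual connection.

    Only these parameters are trained; the surrounding pre-trained
    Transformer stays frozen. Sizes here are illustrative.
    """

    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 64):
        super().__init__()
        self.down_proj = nn.Linear(hidden_dim, bottleneck_dim)  # project down
        self.activation = nn.ReLU()
        self.up_proj = nn.Linear(bottleneck_dim, hidden_dim)    # project back up

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection: the adapter learns only a small task-specific
        # correction on top of the frozen representation.
        return hidden_states + self.up_proj(
            self.activation(self.down_proj(hidden_states)))
```

In practice, one such block is typically inserted after the attention and/or
feed-forward sub-layer of each Transformer layer, so fine-tuning touches only
a few percent of the model's parameters.
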
All of it is a sort of procedural knowledge – recipes for how to make things
work well. Surprisingly, it does not help make NLP methods more explainable,
and work on model interpretability does not seem to help much either (although
most of …

To finish reading, please visit the source site.
