Machine Translation Weekly 47: Notes from the ACL

In this extremely long post, I will not focus on a single paper as I usually do, but instead share my brief, yet still quite long, notes from this year's ACL. Many people have already commented on the virtual format of the conference. I will spare you that and rather talk about the content of the conference, including a list of short paper summaries.

Focus on Evaluation

Many papers commented on how we evaluate our models, and many of them received awards. This is great news! Evaluation (and especially the BLEU score in machine translation) has been the elephant in the NLP room for a very long time, but most people simply accepted the prevailing evaluation practice as the rules of the game. (The game of getting papers accepted.)

I think it has something to do with how competitive the field has become in recent
years. The publication record is one
