ML and NLP Research Highlights of 2020

The selection of areas and methods is heavily influenced by my own interests; the selected topics are biased towards representation and transfer learning and towards natural language processing (NLP). I tried to cover the papers that I was aware of but likely missed many relevant ones—feel free to highlight them in the comments below. In all, I discuss the following highlights:

  1. Scaling up—and down
  2. Retrieval augmentation
  3. Few-shot learning
  4. Contrastive learning
  5. Evaluation beyond accuracy
  6. Practical concerns of large LMs
  7. Multilinguality
  8. Image Transformers
  9. ML for science
  10. Reinforcement learning

*Model sizes of language models from 2018–2020 (Credit: State of AI Report 2020)*
