10 ML & NLP Research Highlights of 2019

This post gathers ten ML and NLP research directions that I found exciting and impactful in 2019.

For each highlight, I summarise the main advances that took place this year, briefly state why I think it is important, and provide a short outlook on the future.

The full list of highlights is here:

  1. Universal unsupervised pretraining
  2. Lottery tickets
  3. The Neural Tangent Kernel
  4. Unsupervised multilingual learning
  5. More robust benchmarks
  6. ML and NLP for science
  7. Fixing decoding errors in NLG
  8. Augmenting pretrained models
  9. Efficient and long-range Transformers
  10. More reliable analysis methods

1. Universal unsupervised pretraining

What happened? Unsupervised pretraining was prevalent in NLP this year, mainly driven by BERT (Devlin et al., 2019).
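To make this concrete, here is a minimal sketch of querying a pretrained BERT as a masked language model. It uses the Hugging Face transformers library and the bert-base-uncased checkpoint, which are my choices for illustration and not something named in this post:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Illustrative choice: any BERT-style masked-LM checkpoint would work here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Ask the pretrained model to fill in a masked token.
text = "Unsupervised pretraining was the dominant [MASK] in NLP in 2019."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the top-5 predicted tokens.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```

The same pretrained weights can then be fine-tuned on a downstream task, which is what makes this style of universal pretraining so broadly useful.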
