Python for NLP: Working with Facebook FastText Library


This is the 20th article in my series of articles on Python for NLP. In the last few articles, we have been exploring deep learning techniques to perform a variety of machine learning tasks, and by now you should also be familiar with the concept of word embeddings. Word embeddings are a way to convert textual information into numeric form, which in turn can be used as input to statistical algorithms. In my article on word embeddings, I explained how we can create our own word embeddings and how we can use built-in word embeddings such as GloVe.
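As a quick refresher, here is a minimal sketch of that idea: it uses gensim's downloader to fetch pretrained GloVe vectors (the specific model name "glove-wiki-gigaword-50" is just one of the available downloads) and shows that each word becomes a fixed-length numeric vector.

```python
# Minimal sketch, assuming gensim is installed and the
# "glove-wiki-gigaword-50" pretrained vectors can be downloaded.
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-50")   # pretrained GloVe word vectors

vector = glove["king"]                       # a 50-dimensional NumPy array
print(vector.shape)                          # (50,)

# Words with vectors closest to "king" in the embedding space
print(glove.most_similar("king", topn=3))
```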

In this article, we are going to study FastText, another extremely useful library for word embeddings and text classification. FastText was developed by Facebook and has shown excellent results on many NLP problems, such as semantic similarity detection and text classification.

In this article, we will briefly explore the FastText library. This article is divided into two sections. In the first section, we will see how the FastText library creates vector representations that can be used to find semantic similarities between words. In the second section, we will see the application of the FastText library for text classification.
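As a preview of the first section, the sketch below trains a small FastText model with gensim's implementation (rather than Facebook's command-line fastText tool); the toy corpus and hyperparameters such as `vector_size`, `window`, and `epochs` are purely illustrative.

```python
from gensim.models import FastText

# Toy corpus: a list of tokenized sentences standing in for real training data.
sentences = [
    ["machine", "learning", "is", "fun"],
    ["deep", "learning", "is", "a", "subset", "of", "machine", "learning"],
    ["fasttext", "builds", "word", "vectors", "from", "character", "ngrams"],
]

# Train a small FastText model; hyperparameters here are illustrative only.
model = FastText(sentences, vector_size=32, window=3, min_count=1, epochs=50)

# Vector for a word that appears in the vocabulary.
print(model.wv["learning"].shape)       # (32,)

# Because FastText builds vectors from character n-grams, it can also
# produce a vector for a word it never saw during training.
print(model.wv["learnings"].shape)      # (32,)

# Cosine similarity between two words in the embedding space.
print(model.wv.similarity("machine", "deep"))
```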
