Part 7: Step by Step Guide to Master NLP – Word Embedding in Detail


Introduction

This article is part of an ongoing blog series on Natural Language Processing (NLP). In the previous articles (Parts 5 and 6), we covered various text vectorization and word embedding techniques in detail. In this article, we will first discuss the co-occurrence matrix, which is also a word vectorization technique (a minimal code sketch follows the list below), and then move on to new concepts related to word embeddings, including:

  • Applications of word embeddings,
  • Word embedding use cases, and
  • Implementation of word embeddings using a pre-trained model and also from scratch (both previewed in code after the table of contents).
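
To make the co-occurrence matrix concrete before we get there, here is a minimal sketch of building one with a fixed context window in plain Python/NumPy. The toy corpus and the window size of 2 are illustrative assumptions, not taken from this series:

import numpy as np

# Toy corpus and window size -- illustrative assumptions only
corpus = ["he is a king", "she is a queen",
          "a king is a man", "a queen is a woman"]
window = 2  # fixed context window: 2 words on each side of the centre word

# Build the vocabulary and a word -> index mapping
tokens = [sentence.split() for sentence in corpus]
vocab = sorted({word for sent in tokens for word in sent})
index = {word: i for i, word in enumerate(vocab)}

# Fill the |V| x |V| matrix: count how often each word pair co-occurs within the window
matrix = np.zeros((len(vocab), len(vocab)), dtype=np.int32)
for sent in tokens:
    for i, word in enumerate(sent):
        lo, hi = max(0, i - window), min(len(sent), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                matrix[index[word], index[sent[j]]] += 1

print(vocab)
print(matrix)  # row i is the distributional vector for vocab[i]

Each row of the matrix can be read off directly as a vector representation of the corresponding word, which is exactly why the co-occurrence matrix counts as a word vectorization technique.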

This is Part 7 of the blog series Step by Step Guide to Natural Language Processing.

Table of Contents

1. Distributional Similarity-based Word Representations

2. Co-occurrence matrix with a fixed context window

  • Different variations of the co-occurrence matrix
  • Advantages and disadvantages of the co-occurrence matrix

3. Applications of Word Embedding

4. Word Embedding Use-case Scenarios

5. Word embedding using Pre-trained Word Vectors

6. Training your own Word Embeddings
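
As a preview of sections 5 and 6, here is a hedged sketch of both steps using the gensim library (an assumption; the full article may use different tooling). It assumes gensim >= 4.0; "glove-wiki-gigaword-50" is one of the pre-trained models exposed by gensim's downloader, and the toy training corpus is again an illustrative assumption:

import gensim.downloader as api
from gensim.models import Word2Vec

# 5. Word embedding using pre-trained word vectors:
# load 50-dimensional GloVe vectors via gensim's downloader (downloads on first use)
glove = api.load("glove-wiki-gigaword-50")
print(glove["king"][:5])                   # first 5 dimensions of the vector for "king"
print(glove.most_similar("king", topn=3))  # nearest neighbours in the vector space

# 6. Training your own word embeddings from scratch on a toy corpus (assumption)
corpus = [["he", "is", "a", "king"],
          ["she", "is", "a", "queen"],
          ["a", "king", "is", "a", "man"],
          ["a", "queen", "is", "a", "woman"]]
model = Word2Vec(sentences=corpus, vector_size=50, window=2,
                 min_count=1, sg=1, epochs=100)  # sg=1 selects the skip-gram architecture
print(model.wv.most_similar("king", topn=2))

As a rule of thumb, pre-trained vectors work well when your vocabulary is close to general English, while training from scratch pays off only with a large, domain-specific corpus.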
