Practical Guide to Word Embedding Systems

This article was published as a part of the Data Science Blogathon

Pre-requisites

– Basic knowledge of Python

– Understanding of the basics of NLP (Natural Language Processing)

 

Introduction

In natural language processing, a word embedding represents each word as a vector that encodes the word's meaning, so that words which are closer together in the vector space are expected to be similar in meaning. Word embeddings are widely used in text analysis.

Consider the pairs boy–man and boy–apple. Can you tell which pair contains words that are more similar to each other?

For us, it is easy to understand the associations between words in a language: we know that boy and man are closer in meaning than boy and apple. But what if we want machines to pick up these kinds of associations automatically? That is where word embeddings come into play.
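The intuition above can be sketched in code. The tiny three-dimensional vectors below are made up purely for illustration (real embeddings are learned from large corpora and typically have hundreds of dimensions); the point is that cosine similarity between vectors lets a machine rank boy–man as more similar than boy–apple.

```python
import numpy as np

# Toy "embeddings" -- hand-crafted for illustration only, not learned.
embeddings = {
    "boy":   np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.8, 0.9, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_boy_man = cosine_similarity(embeddings["boy"], embeddings["man"])
sim_boy_apple = cosine_similarity(embeddings["boy"], embeddings["apple"])
print(f"boy-man:   {sim_boy_man:.3f}")
print(f"boy-apple: {sim_boy_apple:.3f}")
```

With learned embeddings such as Word2Vec or GloVe, the same similarity computation produces this ranking automatically from text data rather than from hand-picked numbers.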

