BERT for Natural Language Inference simplified in PyTorch!

This article was published as a part of the Data Science Blogathon

Introduction to BERT:

BERT stands for Bidirectional Encoder Representations from Transformers. It was introduced by Google researchers in 2018. At the time, BERT achieved state-of-the-art performance on most NLP tasks and drew the attention of the data science community worldwide.

It is extensively used today by data science practitioners for various NLP tasks. Details of how the BERT model works can be found here.

Introduction to Natural Language Inference:

Natural Language Inference (NLI) is an NLP task in which we are given two sentences, a premise and a hypothesis, and must predict whether the hypothesis is true, false, or undetermined with respect to the premise. We call the true case entailment, the false case contradiction, and the undetermined or unrelated case neutral. The following examples illustrate each class, and a short code sketch after them shows the task in practice:

  1. Entailment: A person is riding a horse & A person is outdoors on a horse.
  2. Contradiction: A person is wearing blue & A person is wearing red.
  3. Neutral: A person is riding a horse & A person is riding a horse in a competition.
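
To make this concrete, here is a minimal sketch of running an NLI prediction with a BERT model that has already been fine-tuned on the MNLI dataset, using the Hugging Face transformers library. The checkpoint name below is an assumption (one publicly available BERT-base model fine-tuned on MNLI); any NLI-fine-tuned BERT checkpoint would work the same way.

```python
# A minimal sketch of NLI inference with BERT via Hugging Face transformers.
# The checkpoint name is an assumption: a publicly available bert-base model
# fine-tuned on MNLI. Substitute any NLI-fine-tuned BERT checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "textattack/bert-base-uncased-MNLI"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

premise = "A person is riding a horse"
hypothesis = "A person is outdoors on a horse"

# BERT consumes the two sentences as one sequence:
# [CLS] premise [SEP] hypothesis [SEP]
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per NLI class

# Read the label names from the model config rather than hard-coding the
# entailment/neutral/contradiction order, which varies between checkpoints.
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```

When fine-tuning BERT for NLI ourselves, the input is encoded the same way: the premise and hypothesis are packed into a single sequence, and a classification head on top of the [CLS] representation is trained to predict entailment, contradiction, or neutral.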