A Gentle Introduction to the Bayes Optimal Classifier

The Bayes Optimal Classifier is a probabilistic model that makes the most probable prediction for a new example.

It is described using Bayes Theorem, which provides a principled way of calculating a conditional probability. It is also closely related to Maximum a Posteriori (MAP), a probabilistic framework that finds the most probable hypothesis given a training dataset.
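To make the two ideas concrete, here is a minimal sketch of Bayes Theorem and the MAP hypothesis over a tiny, made-up hypothesis space. The hypotheses, priors, and likelihoods are illustrative assumptions only, not values from the post.

```python
# A minimal sketch of Bayes Theorem and MAP over a tiny hypothesis space.
# The hypotheses, priors, and likelihoods below are assumed for illustration.

# prior probability P(h) for each candidate hypothesis
priors = {'h1': 0.3, 'h2': 0.5, 'h3': 0.2}

# likelihood P(D | h): how well each hypothesis explains the observed data D
likelihoods = {'h1': 0.05, 'h2': 0.20, 'h3': 0.10}

# Bayes Theorem: P(h | D) = P(D | h) * P(h) / P(D)
# where the evidence P(D) is the sum of P(D | h) * P(h) over all hypotheses
evidence = sum(likelihoods[h] * priors[h] for h in priors)
posteriors = {h: likelihoods[h] * priors[h] / evidence for h in priors}

# MAP: choose the hypothesis with the largest posterior probability
h_map = max(posteriors, key=posteriors.get)
print(posteriors)  # approx {'h1': 0.11, 'h2': 0.74, 'h3': 0.15}
print(h_map)       # 'h2'
```

The posterior for each hypothesis is just its prior belief rescaled by how well it explains the data, and MAP simply picks the single hypothesis with the largest posterior.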

In practice, the Bayes Optimal Classifier is computationally expensive, if not intractable, to calculate; instead, simplifications such as the Gibbs algorithm and Naive Bayes can be used to approximate its predictions.
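The sketch below shows why the full Bayes optimal prediction is more work than using a single hypothesis: every hypothesis votes on the new instance, weighted by its posterior. The posteriors and per-hypothesis class probabilities are assumed values for illustration only.

```python
# A minimal sketch of the Bayes Optimal Classifier prediction for a new instance,
# using assumed posteriors P(h | D) and per-hypothesis class probabilities.

# posterior probability of each hypothesis given the training data
posteriors = {'h1': 0.4, 'h2': 0.3, 'h3': 0.3}

# probability each hypothesis assigns to the positive class for the new instance
p_positive = {'h1': 0.9, 'h2': 0.2, 'h3': 0.1}

# Bayes optimal prediction: weight every hypothesis's vote by its posterior,
# i.e. P(class | D) = sum over h of P(class | h) * P(h | D)
p_pos = sum(p_positive[h] * posteriors[h] for h in posteriors)
p_neg = 1.0 - p_pos

prediction = 'positive' if p_pos > p_neg else 'negative'
print(p_pos, prediction)  # prints: 0.45 negative
```

Note that in this toy example the single most probable hypothesis (h1) would predict the positive class, while the posterior-weighted vote over all hypotheses predicts the negative class. This is exactly why the Bayes optimal prediction can differ from, and on average outperform, the prediction of the MAP hypothesis alone, and why summing over the whole hypothesis space becomes expensive as that space grows.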

In this post, you will discover the Bayes Optimal Classifier for making the most accurate predictions for new instances of data.

After reading this post, you will know:

  • Bayes Theorem provides a principled way of calculating a conditional probability, called the posterior probability.
  • Maximum a Posteriori is a probabilistic framework that finds the most probable hypothesis that describes the training dataset.
  • The Bayes Optimal Classifier is a probabilistic model that uses the training data and the space of hypotheses to make the most probable prediction for a new data instance.
