Naive Bayes

The idea behind Naive Bayes is simple: rely on previously seen examples to make predictions, and update your beliefs as new results come in!

Intuition

Rely on the past to make predictions about the future

The key idea with Bayesian approaches is probabilistic inference. This means that you start off with a set of parameters for your model (these can be arbitrary values or educated guesses). Then, you continually update your belief as each incoming data sample is observed.
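To make this concrete, here is a minimal sketch of iterative belief updating in Python. The coin-flip scenario and all of the numbers are made up purely for illustration: we hold a prior over two hypotheses, fold in one observation at a time with Bayes' rule, and use each posterior as the prior for the next observation.

```python
# Illustrative sketch of iterative Bayesian updating (hypothetical scenario).
# Hypotheses: the coin is fair, or it is biased towards heads.
prior = {"fair": 0.5, "biased": 0.5}

# Likelihood of observing heads under each hypothesis.
likelihood_heads = {"fair": 0.5, "biased": 0.9}

observations = ["H", "H", "T", "H", "H"]  # incoming data samples

belief = dict(prior)
for obs in observations:
    # P(obs | hypothesis) for this sample
    likelihoods = {
        h: likelihood_heads[h] if obs == "H" else 1 - likelihood_heads[h]
        for h in belief
    }
    # Unnormalized posterior: prior * likelihood
    unnormalized = {h: belief[h] * likelihoods[h] for h in belief}
    evidence = sum(unnormalized.values())
    # The normalized posterior becomes the prior for the next observation
    belief = {h: p / evidence for h, p in unnormalized.items()}
    print(obs, belief)
```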

Bayes' Theorem

At first glance, Bayes' Theorem may seem like just another probability identity. However, this rule is the basis of the Bayesian approach to Machine Learning. To apply Bayes' rule to our classifiers, you can think of B as representing our evidence and A as representing our hypothesis. Thus, P(A | B) represents our posterior, or our updated belief, and P(B | A) represents our likelihood function, i.e. how likely our evidence is given our hypothesis. Finally, P(A) is the prior, and on every iteration the posterior becomes the prior for the next update.
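In symbols, Bayes' Theorem relates these quantities as:

$$
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
$$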

Naive Bayes

The biggest difference from the above is that we assume each feature (x_1, ..., x_n) is conditionally independent of the others given the class. Thus, the likelihood simplifies to a product of the individual conditional feature probabilities. This gives us the following:
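Writing y for the class label, this is the standard factorized form of the posterior:

$$
P(y \mid x_1, \ldots, x_n) \;\propto\; P(y) \prod_{i=1}^{n} P(x_i \mid y)
$$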

Applications of Naive Bayes

Naive Bayes classifiers work well for various NLP tasks, like spam filtering. While words in natural language do have conditional dependencies (grammar, word order, etc.), Naive Bayes classifiers still perform remarkably well on these problems even though the independence assumption is violated.

Quick Example!
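Below is a minimal sketch of a Naive Bayes spam filter using scikit-learn. The tiny training set is made up for illustration; it simply shows how word-count features and MultinomialNB fit the setup described above.

```python
# Illustrative sketch: Naive Bayes spam filtering with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical toy dataset (not real spam data).
train_messages = [
    "win a free prize now",
    "limited offer click here",
    "are we still meeting tomorrow",
    "can you send the report",
]
train_labels = ["spam", "spam", "ham", "ham"]

# Turn each message into word-count features (the x_1 ... x_n above).
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_messages)

# MultinomialNB multiplies the per-word conditional probabilities,
# which is exactly the naive independence assumption described above.
classifier = MultinomialNB()
classifier.fit(X_train, train_labels)

test_messages = ["free prize offer", "send the meeting report"]
X_test = vectorizer.transform(test_messages)
print(classifier.predict(X_test))        # e.g. ['spam' 'ham']
print(classifier.predict_proba(X_test))  # posterior probability per class
```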
