
What Is Bayesian Learning?

Bayesian learning is a probabilistic approach to machine learning derived from Bayes' theorem. The goal of machine learning is to determine the best hypothesis from the hypothesis space based on the training data.


In Bayesian learning, the best hypothesis can be interpreted as the most probable hypothesis given the training data and the prior probabilities over the hypothesis space. Bayes' theorem calculates the probability of a hypothesis from its prior probability, the probability of observing various data given the hypothesis, and the observed data itself. Bayes' theorem can be mathematically modelled as follows:

P(h|D) = P(D|h) P(h) / P(D)

P(h) represents the initial probability that hypothesis h is true (the prior probability), which reflects any background knowledge we have before seeing the data. P(D) represents the probability of observing data D regardless of which hypothesis holds (the evidence).


P(D|h) denotes the probability of observing data D given that hypothesis h is true (the likelihood of D given h), and P(h|D) denotes the probability that hypothesis h is true given the training data D (the posterior probability of h given D).
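As a numeric illustration of Bayes' theorem, here is a short sketch in Python. All the probabilities below are made-up numbers for the example (a hypothetical disease test), not values from this article:

```python
# Hypothesis h: "patient has the disease"; data D: "test is positive".
p_h = 0.01          # prior P(h), assumed
p_d_given_h = 0.95  # likelihood P(D|h), assumed
p_d_given_not_h = 0.05  # false-positive rate, assumed

# Evidence P(D) by total probability over h and not-h.
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# Posterior P(h|D) = P(D|h) * P(h) / P(D).
p_h_given_d = p_d_given_h * p_h / p_d
print(round(p_h_given_d, 3))  # ~0.161: a positive test alone is weak evidence
```

Note how a small prior P(h) keeps the posterior low even with a high likelihood P(D|h); this is exactly the balancing that Bayes' theorem expresses.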


From this formula, we can compute the posterior probability of every hypothesis in the hypothesis space given the data and select the hypothesis with the highest posterior. This hypothesis is called the maximum a posteriori (MAP) hypothesis:

h_MAP = argmax_{h ∈ H} P(D|h) P(h)

(P(D) can be dropped from the argmax because it is the same for every hypothesis.)
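The MAP selection above can be sketched with a tiny hypothesis space. The hypothesis names and all probabilities here are assumptions chosen for illustration:

```python
priors = {"h1": 0.3, "h2": 0.5, "h3": 0.2}       # P(h), assumed
likelihoods = {"h1": 0.4, "h2": 0.1, "h3": 0.9}  # P(D|h), assumed

# P(D) cancels in the argmax, so compare P(D|h) * P(h) directly.
h_map = max(priors, key=lambda h: likelihoods[h] * priors[h])
print(h_map)  # h3: 0.9 * 0.2 = 0.18 beats h1 (0.12) and h2 (0.05)
```

Note that h2 has the largest prior but a small likelihood, so it loses to h3; the MAP hypothesis weighs both factors.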


One practical classification algorithm that applies Bayes' theorem is the Naïve Bayes classifier. This algorithm estimates the target value from the probabilities of its attribute values occurring given each class, together with the prior probabilities of the target values.


The target value chosen is the class with the highest probability of occurring given the attributes. Mathematically, the Naïve Bayes classifier is modeled as follows:

v_NB = argmax_{v_j ∈ V} P(v_j) ∏_i P(a_i | v_j)

Since the Naïve Bayes classifier chooses the target value with the highest probability given the observed attributes, it applies the maximum a posteriori hypothesis to calculate v_MAP = argmax_{v_j ∈ V} P(a_1, a_2, …, a_n | v_j) P(v_j), since P(a_1, a_2, …, a_n | v_j) and P(v_j) can be estimated from the training data. Based on the MAP hypothesis, we can derive the Naïve Bayes classifier.

Moreover, the naïve assumption applies in the Naïve Bayes classifier: every attribute value is conditionally independent of the others given the class. The Naïve Bayes classifier is applicable to discrete-valued (Multinomial Naïve Bayes), Boolean-valued (Bernoulli Naïve Bayes), and continuous-valued (Gaussian Naïve Bayes) attributes.
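As a sketch of the discrete case, here is a minimal Naïve Bayes classifier on a made-up weather toy data set. The data, attribute values, and class labels are all assumptions for illustration, and no smoothing is applied:

```python
from collections import Counter, defaultdict

# Each training example: (attribute values, class label). Toy data, assumed.
data = [
    (("sunny", "hot"), "no"),
    (("sunny", "mild"), "no"),
    (("rain", "mild"), "yes"),
    (("overcast", "hot"), "yes"),
    (("rain", "cool"), "yes"),
]

class_counts = Counter(label for _, label in data)
# attr_counts[label][position][value] = how often value appears in that class
attr_counts = defaultdict(lambda: defaultdict(Counter))
for attrs, label in data:
    for i, a in enumerate(attrs):
        attr_counts[label][i][a] += 1

def predict(attrs):
    # v_NB = argmax_v P(v) * product_i P(a_i | v)
    best, best_score = None, -1.0
    for label, n in class_counts.items():
        score = n / len(data)  # prior P(v) from class frequencies
        for i, a in enumerate(attrs):
            score *= attr_counts[label][i][a] / n  # P(a_i | v), no smoothing
        if score > best_score:
            best, best_score = label, score
    return best

print(predict(("rain", "mild")))  # "yes"
```

The conditional independence assumption is what lets us multiply the per-attribute terms P(a_i | v) instead of estimating the full joint P(a_1, …, a_n | v), which would need far more training data.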
