Sunday, August 5, 2007

Statistical Learning - Bayesian Logic

Named for Thomas Bayes, an English clergyman and mathematician, Bayesian logic is a branch of logic applied to decision making and inferential statistics that deals with probability inference: using the knowledge of prior events to predict future events.

Bayes' Theorem is a means of quantifying uncertainty. Based on probability theory, the theorem defines a rule for refining a hypothesis by factoring in additional evidence and background information, and leads to a number representing the degree of probability that the hypothesis is true.

To demonstrate an application of Bayes' Theorem, suppose that we have a covered basket containing three balls, each of which may be green or red. In a blind test, we reach in and pull out a red ball. We return the ball to the basket and try again, again pulling out a red ball. Once more, we return the ball to the basket and pull out a ball - red again. We form a hypothesis that all the balls are, in fact, red. Bayes' Theorem can be used to calculate the probability (p) that all the balls are red (an event labeled "A") given (symbolized as "|") that all the selections have been red (an event labeled "B"):

p(A|B) = p(A and B) / p(B)
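
In code, the rule is nothing more than this ratio. Here is a minimal Python sketch; the function and argument names are illustrative choices, not standard notation, and the numbers plugged in are the ones derived for the basket example in the next paragraph:

from fractions import Fraction

# Bayes' rule as stated above: p(A|B) = p(A and B) / p(B).
# Function and argument names are illustrative, not standard.
def p_a_given_b(p_a_and_b, p_b):
    return p_a_and_b / p_b

# For the basket example: p(A and B) = 1/8 and p(B) = 1/4,
# giving a posterior probability of 1/2.
print(p_a_given_b(Fraction(1, 8), Fraction(1, 4)))  # 1/2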

If each ball is independently equally likely to be red or green, there are eight equally likely colorings of the basket (RRR, RRG, RGR, GRR, RGG, GRG, GGR, GGG), so the chance that all the balls are red is 1/8. Because an all-red basket can only yield red selections, 1/8 is also the probability that all the balls are red AND all three selections are red. Averaging over all eight colorings, the overall probability of drawing red three times with replacement works out to 1/4 (an all-red basket yields red draws every time, a two-red basket does so with probability (2/3)^3, a one-red basket with probability (1/3)^3, and an all-green basket never). Bayes' Theorem then calculates the probability that all the balls in the basket are red, given that all the selections have been red, as (1/8) / (1/4) = .5 (probabilities are expressed as numbers between 0 and 1, with 1 indicating 100% probability and 0 indicating zero probability).
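
Those numbers can be checked by brute-force enumeration. The following Python sketch assumes, as the example implicitly does, that each ball is independently red or green with probability 1/2:

from itertools import product
from fractions import Fraction

# Enumerate all 8 equally likely colorings of the three balls.
colorings = list(product('RG', repeat=3))

p_a = Fraction(0)        # p(all balls red)
p_b = Fraction(0)        # p(three red selections, with replacement)
p_a_and_b = Fraction(0)  # p(both events together)

for basket in colorings:
    prior = Fraction(1, len(colorings))
    reds = basket.count('R')
    # Probability of drawing red three times from this basket.
    likelihood = Fraction(reds, 3) ** 3
    p_b += prior * likelihood
    if reds == 3:
        p_a += prior
        p_a_and_b += prior * likelihood

print(p_a, p_b, p_a_and_b)   # 1/8 1/4 1/8
print(p_a_and_b / p_b)       # 1/2, the .5 from the text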
