The essence of Bayesian methods is a mathematical rule explaining how you should change your existing beliefs in the light of new evidence. It allows people to combine new data with their existing knowledge or expertise. In Bayesian statistics, probability represents an individual’s degree of belief that a particular event will occur. Thus the Bayesian approach is based on personal or subjective probabilities. Central to Bayesian methods is the use of Bayes’ Theorem, which states that if A and B are events and P(B), the probability of event B, is greater than zero, then:
P(A|B) = P(B|A) · P(A) / P(B)
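As a quick worked example (the numbers here are purely illustrative): suppose 1% of a population has a disease, so P(A) = 0.01, and a test detects the disease with probability P(B|A) = 0.95 when it is present but also gives a false positive with probability 0.05 when it is absent. Then P(B) = 0.95 × 0.01 + 0.05 × 0.99 ≈ 0.059, and P(A|B) = 0.0095 / 0.059 ≈ 0.16, so a positive result raises the probability of disease from 1% to about 16%.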
The theorem is most commonly interpreted by taking A to be a model that we think might be true (in frequentist terms, a hypothesis) and B to be our observations. In this case:
• The prior estimate of probability, P(A), is our initial belief about the probability of A being true.
• The posterior estimate P(A|B) is the probability of A being true given that B has been observed.
• The likelihood factor, P(B|A) is the probability of event B occurring if A is true.
• Finally, we consider the range of possible A's so that we can calculate P(B), the total probability of B occurring under any A. Since B did happen, we divide by P(B) to normalize the answer.

In frequentist inference, tests of significance are performed by supposing that one particular hypothesis, the null hypothesis, is true, and then computing the probability of observing a statistic at least as extreme as the one actually observed in hypothetical future repeated trials (this is the P-value). In other words, frequentist statistics examines the probability of the data given a model (hypothesis). By contrast, Bayesian statistics allows us to examine the probability that a possible model is true, given the available data. It also allows us to compare the different candidate models and assess their relative probabilities of being true.
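To make the normalization and model-comparison steps concrete, here is a minimal Python sketch that applies Bayes' Theorem to three candidate models; the model names, priors, and likelihoods are invented numbers chosen only for illustration.

```python
# Minimal sketch of a Bayesian update over several candidate models.
# The models, priors, and likelihoods are invented for illustration only.

priors = {"model_1": 0.5, "model_2": 0.3, "model_3": 0.2}          # P(A) for each model A
likelihoods = {"model_1": 0.10, "model_2": 0.40, "model_3": 0.70}  # P(B|A): probability of the observed data B under each model

# P(B): total probability of the data, summed over all candidate models
p_data = sum(priors[m] * likelihoods[m] for m in priors)

# P(A|B): posterior probability of each model given the data
posteriors = {m: priors[m] * likelihoods[m] / p_data for m in priors}

for m in priors:
    print(f"{m}: prior={priors[m]:.2f}, posterior={posteriors[m]:.3f}")
```

With these made-up numbers, the posteriors come out at roughly 0.16, 0.39 and 0.45, and they sum to one by construction: the model under which the observed data were most probable gains the most belief, even though it started with the lowest prior.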