
Introduction to Bayesian inference

Manoj · Thursday, 24 November 2011
Bayesian inference is one of the two dominant approaches to statistical inference. The word "Bayesian" refers to the influence of the Reverend Thomas Bayes, who introduced what is now known as Bayes' theorem. Bayesian inference was developed prior to what is somewhat misleadingly called classical statistics, which is more accurately described as frequentist inference. Bayesian inference is a modern revival of the classical definition of probability, associated with Pierre-Simon Laplace, in contrast to the frequentist definition of probability, most often associated with R. A. Fisher.
Bayesian analysis
A decision-making analysis that '…permits the calculation of the probability that one treatment is superior based on the observed data and prior beliefs…subjectivity of beliefs is not a liability, but rather explicitly allows different opinions to be formally expressed and evaluated.'
In statistics, Bayesian inference is a method of statistical inference in which evidence is used to update the uncertainties attached to competing probability models. Bayesian inference is widely used in science and engineering to estimate model parameters, to predict unknown variables, and to perform model selection.
In the Bayesian interpretation of probability, a probability measures the degree of confidence that something is true, and may equally be termed uncertainty, confidence or belief. Suppose there is a process generating events with unknown probabilities. The state of belief concerning this process is the set of candidate probability models together with their corresponding uncertainties. The uncertainties are subjective, but always sum to 1. When new events are observed, they can be compared to the events predicted by each model, and the uncertainties updated accordingly. A Bayesian inference step uses Bayes' theorem to calculate the updated uncertainties. For each model, the initial uncertainty is called the prior, while the updated value is called the posterior. Typically, as inference steps accumulate, the uncertainty of one model tends to 1 while the uncertainties of the rest tend to 0.
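To make the update rule explicit, write the competing models as M_1, \dots, M_n and let E denote the newly observed evidence (this notation is illustrative, not from the original post). Bayes' theorem then gives

P(M_i \mid E) = \frac{P(E \mid M_i) \, P(M_i)}{\sum_{j=1}^{n} P(E \mid M_j) \, P(M_j)}

where P(M_i) is the prior uncertainty of model M_i and P(M_i \mid E) is its posterior; dividing by the sum over all models ensures the posteriors again sum to 1.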
To determine the parameter of a model M(\theta), \theta \in \Theta, the state of belief may be defined over the set of models \{M(\theta) : \theta \in \Theta\}. After performing Bayesian inference, a point estimate of \theta may be made, typically as the most likely value of \theta (the posterior mode) or as the expectation of \theta under the posterior (the posterior mean).
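As a minimal sketch of this idea in Python (the coin-flip data and the grid of \theta values are invented for the example, not taken from the post), each value of \theta on a grid is treated as a candidate model for the bias of a coin; a uniform prior is updated with Bayes' theorem after every flip, and both point estimates are then read off the posterior:

import numpy as np

# Grid of candidate models M(theta): each theta is a possible coin bias.
thetas = np.linspace(0.01, 0.99, 99)
posterior = np.full(len(thetas), 1.0 / len(thetas))  # uniform prior, sums to 1

flips = [1, 0, 1, 1, 1, 0, 1, 1]  # invented data: 1 = heads, 0 = tails

for x in flips:
    likelihood = thetas if x == 1 else 1.0 - thetas  # P(x | theta) for each model
    posterior = posterior * likelihood   # Bayes' theorem, numerator
    posterior = posterior / posterior.sum()  # renormalise: uncertainties sum to 1

theta_mode = thetas[np.argmax(posterior)]   # most likely value of theta
theta_mean = (thetas * posterior).sum()     # expectation of theta
print(f"posterior mode: {theta_mode:.2f}, posterior mean: {theta_mean:.2f}")

With six heads in eight flips, both estimates land near 0.7, and adding more data concentrates the posterior further around the true bias.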
In Bayesian model selection, the uncertainties of competing models are compared as inference steps occur; a sketch of the idea follows. For further details, see Bayesian model selection.
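A hedged illustration of model selection in the same spirit (both the two models and the data are invented for this example): a fair coin and a biased coin start with equal uncertainty, and repeated inference steps drive the posterior of the better-fitting model toward 1, as the paragraph above describes:

# Two fully specified candidate models for the same coin.
models = {"fair coin (theta = 0.5)": 0.5, "biased coin (theta = 0.8)": 0.8}
posterior = {name: 0.5 for name in models}  # equal prior uncertainty

flips = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # invented data: 1 = heads, 0 = tails
for x in flips:
    # One Bayesian inference step: reweight each model by its likelihood.
    for name, theta in models.items():
        posterior[name] *= theta if x == 1 else 1.0 - theta
    total = sum(posterior.values())
    for name in posterior:
        posterior[name] /= total  # renormalise so uncertainties sum to 1

for name, p in posterior.items():
    print(f"P({name} | data) = {p:.3f}")

Here eight heads in ten flips leave the biased-coin model with a posterior near 0.87, while the fair-coin model's uncertainty shrinks toward 0.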

Thanks for reading Introduction to Bayesian inference
