Background#
Bayesian inference#
In the Bayesian context, one is often interested in approximating the posterior distribution \(\mathcal{P}(\theta)\equiv p(\theta\vert d,\mathcal{M})\), that is, the probability distribution of the parameters \(\theta\) given the data \(d\) and the model \(\mathcal{M}\). This is given by Bayes’ theorem:

\[\mathcal{P}(\theta) = \frac{\mathcal{L}(\theta)\,\pi(\theta)}{\mathcal{Z}}\]

where

\[\mathcal{L}(\theta) \equiv p(d\vert \theta,\mathcal{M})\]

is the likelihood function,

\[\pi(\theta) \equiv p(\theta\vert \mathcal{M})\]

is the prior probability density, and

\[\mathcal{Z} \equiv p(d\vert \mathcal{M}) = \int \mathcal{L}(\theta)\,\pi(\theta)\,d\theta\]

is the so-called model evidence or marginal likelihood.
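To make these ingredients concrete, here is a minimal Python sketch (not part of the original page) that builds an unnormalized log-posterior from a log-likelihood and a log-prior. The data, the Gaussian likelihood, the uniform prior, and the function names `log_prior`, `log_likelihood`, and `log_posterior` are all illustrative assumptions; since \(\mathcal{Z}\) does not depend on \(\theta\), it can be ignored when only the shape of the posterior is needed.

```python
import numpy as np
from scipy.stats import norm, uniform

# Hypothetical data: draws from a unit-variance Gaussian with unknown mean theta.
data = np.array([0.8, 1.2, 0.9, 1.1, 1.0])

def log_prior(theta):
    # log pi(theta): uniform prior on [-5, 5].
    return uniform.logpdf(theta, loc=-5.0, scale=10.0)

def log_likelihood(theta):
    # log L(theta) = log p(d | theta, M): independent Gaussian measurements.
    return norm.logpdf(data, loc=theta, scale=1.0).sum()

def log_posterior(theta):
    # Unnormalized log posterior: log L(theta) + log pi(theta).
    # The evidence Z is a theta-independent constant and is dropped here.
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(theta)
```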
Parameter estimation#
The task of parameter estimation consists of finding the probability distribution of the parameters \(\theta\) of a model \(\mathcal{M}\) given some data \(d\). In practice this is achieved by approximating the posterior distribution with a collection of samples. The distribution of these samples can then be used to approximate various expectation values (e.g. mean, median, standard deviation, credible intervals, 1-D and 2-D marginal posteriors)

\[E_{\mathcal{P}(\theta)}\left[f(\theta)\right] = \int f(\theta)\,\mathcal{P}(\theta)\,d\theta\]

as sums over the samples drawn from the posterior

\[E_{\mathcal{P}(\theta)}\left[f(\theta)\right] \approx \frac{1}{n}\sum_{i=1}^{n} f(\theta_{i}), \quad \theta_{i} \sim \mathcal{P}(\theta).\]
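As a short illustration of this Monte Carlo estimate (a sketch, not from the original page), the following assumes a 1-D array of posterior samples, here faked with a random draw, and computes the summary statistics listed above as simple sums and percentiles over the samples.

```python
import numpy as np

# Hypothetical 1-D posterior samples, standing in for the output of a sampler.
rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=0.1, size=10_000)

# Expectation values reduce to averages over the samples.
posterior_mean = samples.mean()
posterior_std = samples.std()
posterior_median = np.median(samples)

# A 95% credible interval from the 2.5th and 97.5th percentiles.
lo, hi = np.percentile(samples, [2.5, 97.5])

print(f"mean = {posterior_mean:.3f}, std = {posterior_std:.3f}")
print(f"median = {posterior_median:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```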
Model comparison#
For the task of Bayesian model comparison, one is interested in the ratio of posterior probabilities of models \(\mathcal{M}_{i}\) and \(\mathcal{M}_{j}\), given by

\[\frac{p(\mathcal{M}_{i}\vert d)}{p(\mathcal{M}_{j}\vert d)} = \frac{p(d\vert \mathcal{M}_{i})}{p(d\vert \mathcal{M}_{j})} \times \frac{p(\mathcal{M}_{i})}{p(\mathcal{M}_{j})}\]

where the first term on the right-hand side is the so-called Bayes factor and the second term is the ratio of prior probabilities of the two models. The latter is often set to 1 (i.e. no model is preferred a priori). The Bayes factor, on the other hand, is simply the ratio of the model evidences of the two models, or

\[BF_{ij} = \frac{p(d\vert \mathcal{M}_{i})}{p(d\vert \mathcal{M}_{j})} = \frac{\mathcal{Z}_{i}}{\mathcal{Z}_{j}}.\]
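In practice, samplers that estimate the marginal likelihood typically report \(\log\mathcal{Z}\), so the Bayes factor is most safely computed in log space. The sketch below assumes two hypothetical log-evidence values; the variable names and numbers are illustrative only.

```python
import numpy as np

# Hypothetical log-evidences log Z_i and log Z_j for two competing models,
# e.g. as reported by a nested-sampling or SMC run.
log_z_i = -42.3
log_z_j = -44.8

# Bayes factor BF_ij = Z_i / Z_j, computed as a difference of logs
# to avoid overflow/underflow when the evidences are tiny.
log_bayes_factor = log_z_i - log_z_j
bayes_factor = np.exp(log_bayes_factor)

# With equal prior odds p(M_i) = p(M_j), the posterior odds equal BF_ij.
print(f"log BF_ij = {log_bayes_factor:.2f}, BF_ij = {bayes_factor:.2f}")
```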