Today we will look at some common distributions used for Bayesian inference.
Beta
The first distribution we will look at is the Beta distribution. Its density is $[B(a,b)]^{-1}\pi^{a-1}(1-\pi)^{b-1}$, where $B(a,b)$ is the beta function. We can show that:
- If the prior $\sim \pi^{a}(1-\pi)^{b}$
- And the likelihood $\sim \pi^{S}(1-\pi)^{F}$, where $S$ is the number of observed successes and $F$ the number of failures
- Then the posterior $\sim \pi^{a}(1-\pi)^{b} \cdot \pi^{S}(1-\pi)^{F} = \pi^{a+S}(1-\pi)^{b+F}$
Where ‘$\sim$’ denotes proportional to, that is, equal up to a multiplicative constant. If we define:
- $S^* = a + S + 1$
- $F^* = b + F + 1$
Then
- $n^* = S^* + F^*$
- $P^* = S^*/n^*$
We can use $P^*$ and $n^*$ just as we would use $p$ and $n$ in a classical binomial framework: $P^*$ is the posterior mean (the posterior is exactly a Beta($S^*$, $F^*$) distribution), and a Bayesian confidence interval can be calculated as follows.
- Conf. Int.: $P^* \pm t_{\alpha/2}\sqrt{P^*(1-P^*)/n^*}$
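To make the recipe above concrete, here is a minimal sketch in Python. The function name and example counts are hypothetical, and it uses the normal quantile $z_{\alpha/2}$ in place of $t_{\alpha/2}$ (the two agree closely for moderate $n^*$):

```python
from scipy import stats

def beta_posterior_summary(a, b, S, F, alpha=0.05):
    """Posterior summary for a prior ~ pi^a (1-pi)^b
    after observing S successes and F failures."""
    S_star = a + S + 1          # S* = a + S + 1
    F_star = b + F + 1          # F* = b + F + 1
    n_star = S_star + F_star    # n* = S* + F*
    P_star = S_star / n_star    # P* = S*/n*, the posterior mean
    # Approximate interval from the notes: P* +/- z * sqrt(P*(1-P*)/n*)
    z = stats.norm.ppf(1 - alpha / 2)
    half_width = z * (P_star * (1 - P_star) / n_star) ** 0.5
    return P_star, (P_star - half_width, P_star + half_width)

# Flat prior (a = b = 0) with 12 successes and 8 failures (made-up counts)
mean, interval = beta_posterior_summary(0, 0, 12, 8)
print(mean, interval)   # ~0.591, roughly (0.385, 0.796)
```

Since the posterior is exactly Beta($S^*$, $F^*$), one could also take exact quantiles with `stats.beta.ppf(alpha / 2, S_star, F_star)` and `stats.beta.ppf(1 - alpha / 2, S_star, F_star)` instead of relying on the normal approximation.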
Normal Distribution
With the normal distribution, we will again have a normal prior and a likelihood function. The question is: which should we rely on more in forming the posterior, our prior assumptions or the data we collected?
Let the variance of the prior be $\sigma_0^2$ and the variance of the sample be $\sigma^2$. Also let $\mu_0$ be the prior mean and $X^*$ be the sample mean. If $n$ is the number of observations in the sample, we can calculate the posterior mean as:
- Posterior mean $= (n_0\mu_0 + nX^*)/(n_0 + n)$
- where: $n_0 = \sigma^2/\sigma_0^2$
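This is just precision weighting: assuming $n$ independent normal observations with known variance $\sigma^2$, each source of information is weighted by its precision, and multiplying numerator and denominator by $\sigma^2$ recovers the form above:

$$
\text{Posterior mean} = \frac{\mu_0/\sigma_0^2 + nX^*/\sigma^2}{1/\sigma_0^2 + n/\sigma^2} = \frac{n_0\mu_0 + nX^*}{n_0 + n}.
$$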
We can see that the prior mean is given more weight when the prior's variance is small relative to the sample variance, since that makes $n_0$ large; in effect, the prior counts as $n_0$ pseudo-observations. On the other hand, the sample mean is given more weight when the sample variance is small relative to the prior variance. We can calculate the posterior standard error as:
- Posterior S.E. $= \sigma/\sqrt{n_0 + n}$
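As with the Beta case, a minimal Python sketch of this update; the function name and the prior/sample numbers are made up, and attaching a $z_{\alpha/2}$ interval is an addition for illustration:

```python
import math
from scipy import stats

def normal_posterior(mu0, sigma0_sq, xbar, sigma_sq, n, alpha=0.05):
    """Posterior mean and S.E. for a normal mean, given a
    N(mu0, sigma0_sq) prior and n observations with sample
    mean xbar and known sampling variance sigma_sq."""
    n0 = sigma_sq / sigma0_sq                 # n0 = sigma^2 / sigma0^2
    post_mean = (n0 * mu0 + n * xbar) / (n0 + n)
    post_se = math.sqrt(sigma_sq / (n0 + n))  # sigma / (n0 + n)^(1/2)
    z = stats.norm.ppf(1 - alpha / 2)
    return post_mean, post_se, (post_mean - z * post_se,
                                post_mean + z * post_se)

# A vague prior (large sigma0_sq) makes n0 small, so the data dominate
print(normal_posterior(mu0=0.0, sigma0_sq=100.0, xbar=5.2, sigma_sq=4.0, n=25))
```

Here $\sigma_0^2 = 100$ makes the prior worth only $n_0 = 0.04$ observations, so the posterior mean (about 5.19) sits essentially on the sample mean.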