**Last updated:** 2019-03-31


**Knit directory:** `fiveMinuteStats/analysis/`

This reproducible R Markdown analysis was created with workflowr (version 1.2.0). The *Report* tab describes the reproducibility checks that were applied when the results were created. The *Past versions* tab lists the development history.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use `wflow_publish` or `wflow_git_commit`). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:

```
Ignored files:
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: analysis/.Rhistory
Ignored: analysis/bernoulli_poisson_process_cache/
Untracked files:
Untracked: _workflowr.yml
Untracked: analysis/CI.Rmd
Untracked: analysis/gibbs_structure.Rmd
Untracked: analysis/libs/
Untracked: analysis/results.Rmd
Untracked: analysis/shiny/tester/
Untracked: docs/MH_intro_files/
Untracked: docs/citations.bib
Untracked: docs/figure/MH_intro.Rmd/
Untracked: docs/hmm_files/
Untracked: docs/libs/
Untracked: docs/shiny/tester/
```

Note that generated files (e.g. HTML, PNG, CSS) are not included in this status report because it is fine for generated content to have uncommitted changes.

These are the previous versions of the R Markdown and HTML files. If you’ve configured a remote Git repository (see `?wflow_git_remote`), click on the hyperlinks in the table below to view them.

| File | Version | Author | Date | Message |
|---|---|---|---|---|
| html | 34bcc51 | John Blischak | 2017-03-06 | Build site. |
| Rmd | 5fbc8b5 | John Blischak | 2017-03-06 | Update workflowr project with wflow_update (version 0.4.0). |
| html | fb0f6e3 | stephens999 | 2017-03-03 | Merge pull request #33 from mdavy86/f/review |
| Rmd | d674141 | Marcus Davy | 2017-02-27 | typos, refs |
| Rmd | 02d2d36 | stephens999 | 2017-02-20 | add shiny binomial example |
| html | 02d2d36 | stephens999 | 2017-02-20 | add shiny binomial example |
| html | 7bc1873 | stephens999 | 2017-01-28 | Build site. |
| Rmd | 35d9a16 | stephens999 | 2017-01-28 | Files commited by wflow_commit. |
| html | 5d88119 | stephens999 | 2017-01-25 | Build site. |
| Rmd | b48dd9c | stephens999 | 2017-01-25 | Files commited by wflow_commit. |

This vignette illustrates how to perform Bayesian inference for a continuous parameter, specifically a binomial proportion. In particular, it illustrates the mechanics of how we actually calculate the posterior distribution.

You should be familiar with the concept of a likelihood function and with Bayesian inference for discrete random variables. You should also be familiar with the binomial distribution and the Beta distribution.

[Technical Note: to simplify this problem I have assumed that elephants are haploid, which they are not. If you do not know what this means you should simply ignore this comment.]

Suppose we sample 100 elephants from a population, and measure their DNA at a location in their genome (“locus”) where there are two types (“alleles”), which it is convenient to label 0 and 1.

In my sample, I observe that 30 of the elephants have the “1” allele and 70 have the “0” allele. What can I say about the frequency (\(q\) say) of the “1” allele in the population?

Here we are doing inference for a parameter \(q\) that can, in principle, take any value between 0 and 1. That is, we are doing inference for a “continuous” parameter. Bayesian inference for a continuous parameter proceeds in essentially exactly the same way as Bayesian inference for a discrete quantity, except that probability mass functions get replaced by densities.

Specifically remember the form of Bayes Theorem: \[\text{posterior} \propto \text{likelihood} \times \text{prior}.\] To apply this we need to have both the prior distribution and the likelihood.

Here the likelihood for \(q\) is \[L(q):= \Pr(D | q) \propto q^{30} (1-q)^{70}\] where \(D\) here denotes the data. This expression comes from the fact that the data consist of 30 “1” alleles (each of which occur with probability \(q\)) and 70 “0” alleles (each of which occur with probability \(1-q\)), and we assume that the samples are independent. (You might have heard this likelihood called the “binomial likelihood”, because it arises when the data come from a binomial distribution.)
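This likelihood is easy to evaluate directly. As a quick sketch (the variable names here are our own, not from the original text), we can plot it in `R`:

```r
# Unnormalized binomial likelihood L(q) ∝ q^30 (1-q)^70,
# for 30 "1" alleles and 70 "0" alleles in a sample of 100
q <- seq(0, 1, length.out = 500)
L <- q^30 * (1 - q)^70
plot(q, L / max(L), type = "l",
     xlab = "q", ylab = "scaled likelihood")
abline(v = 0.3, lty = 2)  # the likelihood peaks at the MLE, q = 0.3
```

Note that the likelihood is only defined up to a constant of proportionality, which is why it is plotted here scaled by its maximum.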

Recall that the prior distribution is a distribution that is supposed to reflect what we know about \(q\) *prior* to seeing the data. For the purposes of illustration we will assume a uniform prior on \(q\): \(q \sim U[0,1]\). That is \[p(q) = 1 \qquad (q \in [0,1]).\]

This \(U[0,1]\) prior says many things. For example, it says that before seeing the data, \(q<0.5\) is just as plausible as \(q>0.5\). It also says that \(q<0.1\) is just as plausible as \(q>0.9\), or as \(0.4<q<0.5\) (each of these events has prior probability 0.1). If for some reason these are not equally plausible then you should, at least in principle, use a different prior. However, in practice it is sometimes (but not always) the case that the results of Bayesian inference are robust to the choice of prior distribution, so in such cases it is common not to worry too much about minor deviations between what you believe and what the prior implies.

For now, we are simply aiming to show how the Bayesian calculations are done under this prior specification.

Using Bayes Theorem to combine the prior distribution and the likelihood we obtain: \[p(q | D) \propto p(D|q) p(q) = q^{30} (1-q)^{70} \qquad (q \in [0,1]).\]

Here, because \(q\) is a continuous parameter, this is referred to as the “posterior density” for \(q\).

Now the final “trick” is to notice that this expression, \(q^{30} (1-q)^{70}\), is proportional to the density of a Beta distribution; specifically, it is proportional to the density of a Beta(31,71) distribution. So the posterior distribution for \(q\) is Beta(31,71), and we might write \(q | D \sim \text{Beta}(31,71)\).
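One way to convince yourself of this is a quick numerical check (a sketch of our own, not part of the original text): the unnormalized posterior should differ from the Beta(31,71) density `dbeta(q, 31, 71)` only by a constant factor, the normalizing constant \(1/B(31,71)\).

```r
# The unnormalized posterior q^30 (1-q)^70 should be a constant
# multiple of the Beta(31,71) density at every q in (0,1)
q <- seq(0.01, 0.99, by = 0.01)
ratio <- dbeta(q, 31, 71) / (q^30 * (1 - q)^70)
range(ratio)  # essentially constant: the normalizing constant 1/B(31,71)
```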

This kind of “trick” is common in Bayesian inference: you look at the posterior density and “recognize” it as a distribution you know. It turns out that the number of distributions in common use is relatively small, so you only need to learn a few distributions to get sufficiently good at this trick for practical purposes. For example, it is a good start to be able to recognize the following distributions: exponential, binomial, Poisson, Gamma, Beta, Dirichlet, and Normal. If your posterior distribution does not look like one of these, then you may well be in a situation where you need to use computational methods (like Importance Sampling or Markov chain Monte Carlo) to do your computations.

So in this case we are lucky: the posterior distribution is a nice distribution that we recognize, and this means we can do lots of calculations very easily. `R` has many built-in functions for the Beta distribution, and many of its analytic properties have been derived (e.g., see Wikipedia). We can use these to summarize and interpret the posterior distribution.
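For example, a few such summaries (a sketch using `R`'s built-in Beta functions; the specific summaries chosen are ours):

```r
# Posterior summaries for q | D ~ Beta(31,71)
post_mean <- 31 / (31 + 71)                  # posterior mean = 31/102
post_ci   <- qbeta(c(0.025, 0.975), 31, 71)  # central 95% credible interval
pr_half   <- pbeta(0.5, 31, 71)              # posterior Pr(q < 0.5)
curve(dbeta(x, 31, 71), from = 0, to = 1,
      xlab = "q", ylab = "posterior density")
```

So the posterior mean is \(31/102 \approx 0.304\), close to the sample frequency 0.3, and the posterior puts essentially all its mass on \(q < 0.5\).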

To compute the posterior density of a continuous parameter, up to a normalizing constant, you multiply the likelihood by the prior density.

In simple cases you may find that the result is the density of a distribution you recognize. If so, you can often use known properties of that distribution to compute quantities of interest.

In cases where you do not recognize the posterior distribution, you may need to use computational methods to compute quantities of interest.
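For intuition, here is a minimal sketch of such a computation: a grid approximation that normalizes likelihood \(\times\) prior numerically. (This is our own illustrative addition; in this example we can check it against the exact Beta(31,71) answer, but the same recipe applies when no closed form is available.)

```r
# Grid approximation to the posterior: evaluate likelihood x prior on a
# fine grid and normalize numerically (useful when the posterior is not
# a recognized distribution)
q  <- seq(0, 1, length.out = 1000)
dq <- q[2] - q[1]
unnorm <- q^30 * (1 - q)^70 * 1      # likelihood times uniform prior density
post <- unnorm / (sum(unnorm) * dq)  # normalize so it integrates to ~1
# In this example we can verify against the exact posterior:
max(abs(post - dbeta(q, 31, 71)))    # small approximation error
```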

This site was created with R Markdown