**Last updated:** 2019-03-31

**Checks:** 6 passed, 0 failed

**Knit directory:** `fiveMinuteStats/analysis/`

This reproducible R Markdown analysis was created with workflowr (version 1.2.0). The *Report* tab describes the reproducibility checks that were applied when the results were created. The *Past versions* tab lists the development history.

The command `set.seed(12345)` was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use `wflow_publish` or `wflow_git_commit`). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:

```
Ignored files:
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: analysis/.Rhistory
Ignored: analysis/bernoulli_poisson_process_cache/
Untracked files:
Untracked: _workflowr.yml
Untracked: analysis/CI.Rmd
Untracked: analysis/gibbs_structure.Rmd
Untracked: analysis/libs/
Untracked: analysis/results.Rmd
Untracked: analysis/shiny/tester/
Untracked: docs/MH_intro_files/
Untracked: docs/citations.bib
Untracked: docs/hmm_files/
Untracked: docs/libs/
Untracked: docs/shiny/tester/
```

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.

These are the previous versions of the R Markdown and HTML files. If you’ve configured a remote Git repository (see `?wflow_git_remote`), click on the hyperlinks in the table below to view them.

File | Version | Author | Date | Message
---|---|---|---|---
html | d251967 | stephens999 | 2018-04-23 | Build site.
Rmd | 1b4481e | Yue-Jiang | 2017-08-15 | minor, fix typo
html | 34bcc51 | John Blischak | 2017-03-06 | Build site.
Rmd | 5fbc8b5 | John Blischak | 2017-03-06 | Update workflowr project with wflow_update (version 0.4.0).
Rmd | 391ba3c | John Blischak | 2017-03-06 | Remove front and end matter of non-standard templates.
html | fb0f6e3 | stephens999 | 2017-03-03 | Merge pull request #33 from mdavy86/f/review
html | c3b365a | John Blischak | 2017-01-02 | Build site.
Rmd | 67a8575 | John Blischak | 2017-01-02 | Use external chunk to set knitr chunk options.
Rmd | 5ec12c7 | John Blischak | 2017-01-02 | Use session-info chunk.
Rmd | bd6aa11 | Nan Xiao | 2016-01-31 | formula typo fix
Rmd | 93e619c | Nan Xiao | 2016-01-27 | LR_and_BF URL Fix
Rmd | 9ef3240 | stephens999 | 2016-01-19 | small notation change
Rmd | e5abac8 | stephens999 | 2016-01-19 | add exercise
Rmd | bf76cbd | stephens999 | 2016-01-12 | add LR and BF

This document introduces the idea that drawing conclusions from likelihood ratios (or Bayes Factors) from fully specified models is context dependent. In particular, it involves weighing the information in the data against the relative (prior) plausibility of the models.

You should understand the concept of using the likelihood ratio, for both discrete and continuous data, to compare support for two fully specified models.

Recall that a fully specified model is one with no free parameters. So a fully specified model for \(X\) is simply a probability distribution \(p(x|M)\). And, given observed data \(X=x\) the Likelihood Ratio for comparing two fully specified models, \(M_1\) vs \(M_0\), is defined as the ratio of the probabilities of the observed data under each model:

\[LR(M_1,M_0) := p(x | M_1) / p(x | M_0).\]

For fully specified models, the likelihood ratio is also known as the “Bayes Factor” (BF), so we could also define the Bayes Factor for \(M_1\) vs \(M_0\) as \[BF(M_1,M_0) := p(x | M_1) / p(x | M_0).\] When comparing fully specified models the LR and BF are just two different names for the same thing.
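As a concrete numerical illustration, the following Python sketch computes the LR (equivalently, the BF) for a single observation under two hypothetical fully specified models. The normal densities and their means here are purely illustrative assumptions, not the models from the examples discussed in this document.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mean, sd=1.0):
    """Density of a normal distribution with the given mean and sd, at x."""
    return exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * sqrt(2 * pi))

def likelihood_ratio(x, density_m1, density_m0):
    """LR(M1, M0) = p(x | M1) / p(x | M0) for two fully specified models."""
    return density_m1(x) / density_m0(x)

# Illustrative models: M1 says X ~ N(1, 1), M0 says X ~ N(0, 1).
# The observation x = 0.8 lies closer to M1's mean, so LR > 1.
lr = likelihood_ratio(0.8,
                      lambda x: normal_pdf(x, 1.0),
                      lambda x: normal_pdf(x, 0.0))
```

Note that by definition \(LR(M_1,M_0) = 1/LR(M_0,M_1)\), which the sketch above respects automatically.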

In the example here we considered the problem of determining whether an elephant tusk came from a savanna elephant or a forest elephant, based on examining DNA data. Specifically, we computed the LR (or BF) comparing two models, \(M_S\) and \(M_F\), where \(M_S\) denotes the model that the tusk came from a savanna elephant and \(M_F\) denotes the model that the tusk came from a forest elephant. In our example we found LR=1.8, so the data favor \(M_S\) by a factor of 1.8. We commented that a factor of 1.8 is relatively modest, and not sufficient to convince us that the tusk definitely came from a savanna elephant.

In the example here we considered the problem of testing a patient for a disease based on measuring the concentration of a particular diagnostic protein in the blood. Specifically, we computed the LR (or BF) comparing two models, \(M_n\) and \(M_d\), where \(M_n\) denotes the model that the patient is normal and \(M_d\) denotes the model that the patient has the disease. In our example we found the LR for \(M_n\) vs \(M_d\) to be approximately 0.07, so the data favor \(M_d\) by a factor of about 14. This is a much larger LR than in the first case, but is it convincing? Can we confidently conclude that the patient has the disease?

More generally, we would like to address the question: what value of the LR should we treat as “convincing” evidence for one model vs another?

It is crucial to recognize that the answer to this question has to be context dependent. In particular, the extent to which we should be “convinced” by a particular LR value has to depend on the relative plausibility of the models we are comparing. For example, in statistics there are many situations where we want to compare models that are not equally plausible. Suppose that model \(M_1\) is much less plausible than \(M_0\). Then we must surely demand stronger evidence from the data (a larger LR) to be “convinced” that the data arose from model \(M_1\) rather than \(M_0\) than in contexts where \(M_1\) and \(M_0\) are equally plausible.

In the disease screening example, the interpretation depends on the frequency of the disease in the population being screened. For example, suppose that only 5% of people screened actually have the disease. Then to outweigh that fact, we would have to demand a high LR to “convince” us that a particular person has the disease. In contrast, if 50% of people screened have the disease then we might be convinced by a much smaller LR.

The following calculation formalizes this intuition. Suppose we are presented with a series of observations \(x_1,\dots,x_n\), some of which are generated from model \(M_0\) and the others of which are generated from model \(M_1\). Let \(Z_i \in \{0,1\}\) indicate which model generated \(x_i\) (so \(Z_i=j\) means \(x_i\) came from model \(M_j\)), and let \(\pi_j\) denote the proportion of the observations that came from model \(M_j\) (\(j=0,1\)); that is, \(\pi_j = \Pr(Z_i=j)\). So in the disease screening example, \(\pi_1\) would be the proportion of screened individuals who have the disease.

Bayes Theorem says that \[\Pr(Z_i = 1 | x_i) = \Pr(x_i | Z_i = 1) \Pr(Z_i=1)/ \Pr(x_i).\] Also, \[\Pr(x_i) = \Pr(x_i | Z_i=0)\Pr(Z_i=0) + \Pr(x_i | Z_i=1)\Pr(Z_i=1).\]

Putting these together, substituting \(\pi_j\) for \(\Pr(Z_i=j)\), and dividing numerator and denominator by \(\Pr(x_i | Z_i=0)\) yields:

\[\Pr(Z_i = 1 | x_i) = \pi_1 LR_i / (\pi_0 + \pi_1 LR_i)\]

where \(LR_i\) denotes the likelihood ratio for \(M_1\) vs \(M_0\) computed for observation \(x_i\).
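This posterior probability is straightforward to compute directly; the following is a minimal Python sketch (the function name is ours, chosen for illustration), which reproduces the screening calculations in the disease example.

```python
def posterior_prob(pi1, lr):
    """Pr(Z_i = 1 | x_i) = pi1 * LR_i / (pi0 + pi1 * LR_i), where pi0 = 1 - pi1."""
    pi0 = 1.0 - pi1
    return pi1 * lr / (pi0 + pi1 * lr)

# Disease prevalence of 5% vs 50%, each with LR = 14 in favor of disease.
p_low = posterior_prob(0.05, 14)   # about 0.42
p_high = posterior_prob(0.5, 14)   # about 0.93
```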

Suppose that only 5% of screened people have the disease. Then an LR of 14 in favor of disease yields: \[\Pr(Z_i=1 | x_i) = 0.05 \times 14/(0.95 + 0.05 \times 14)\] = `0.4242424`.

In contrast, if 50% of screened people have the disease then the LR of 14 yields \[\Pr(Z_i=1 | x_i) = 0.5 \times 14/(0.5 + 0.5 \times 14)\] = `0.9333333`.

Thus in the first case, of patients with \(LR=14\), less than half would actually have the disease. In the second case, of patients with \(LR=14\), more than 90% would have the disease!

There is another way of laying out this kind of calculation, which may be slightly easier to interpret and remember, and also has the advantage of holding even when more than two models are under consideration. From Bayes theorem we have

\[\Pr(Z_i = 1 | x_i) = \Pr(x_i | Z_i = 1) \Pr(Z_i=1)/ \Pr(x_i).\]

and

\[\Pr(Z_i = 0 | x_i) = \Pr(x_i | Z_i = 0) \Pr(Z_i=0)/ \Pr(x_i).\]

Taking the ratio of these gives \[\Pr(Z_i = 1 | x_i)/\Pr(Z_i = 0 | x_i) = [\Pr(Z_i=1)/\Pr(Z_i=0)] \times [\Pr(x_i | Z_i = 1)/\Pr(x_i | Z_i = 0)].\]

This formula can be conveniently stated in words, using the notion of “odds”, as follows: \[\text{Posterior Odds} = \text{Prior Odds} \times \text{LR}\] or, recalling that the LR is sometimes referred to as the Bayes Factor (BF), we have \[\text{Posterior Odds} = \text{Prior Odds} \times \text{BF}.\]

Note that the “Odds” of an event \(E_1\) vs an event \(E_2\) means the ratio of their probabilities. So \(\Pr(Z_i=1)/\Pr(Z_i=0)\) is the “Odds” of \(Z_i=1\) vs \(Z_i=0\). It is referred to as the “Prior Odds”, because it is the odds prior to seeing the data \(x\). Similarly the Posterior Odds refers to the Odds of \(Z_i=1\) vs \(Z_i=0\) “posterior to” (after) seeing the data \(x\).

Suppose that only 5% of screened people have the disease. Then the prior odds of disease are 1/19, and an LR of 14 in favor of disease yields posterior odds of 14/19 (or “14 to 19”).

Suppose instead that 50% of screened people have the disease. Then the prior odds of disease are 1, and an LR of 14 in favor of disease yields posterior odds of 14 (or “14 to 1”).

In cases where there are only two possibilities, as here, the posterior odds determine the posterior probabilities: if the posterior odds are \(o\), the posterior probability is \(o/(1+o)\).
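The odds form gives the same answers as the direct formula; a minimal Python sketch (function names are illustrative):

```python
def posterior_odds(prior_odds, lr):
    """Posterior Odds = Prior Odds x LR (equivalently, x BF)."""
    return prior_odds * lr

def odds_to_prob(odds):
    """With only two possibilities, probability = odds / (1 + odds)."""
    return odds / (1.0 + odds)

# 5% prevalence: prior odds 0.05/0.95 = 1/19; an LR of 14 gives
# posterior odds 14/19, i.e. a posterior probability of 14/33 (about 0.42).
post = posterior_odds(1 / 19, 14)
```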

- Write a function to simulate data for the medical screening example above. The function should take as input the proportion of individuals who have the disease and the number of individuals to simulate. It should output a table, one row per individual, with two columns: the first column (x) is the protein concentration for that individual; the second column (z) is an indicator of disease status (1=disease, 0=normal).

- Write a function to compute the likelihood ratio in the medical screening example.

- Use the above functions to answer the following question *by simulation*. Suppose we screen a population of individuals, 20% of which are diseased, and compute the LR for each individual screened. Among individuals with an LR “near” \(c\), what proportion are truly diseased? Denoting this proportion \(q_D(c)\), make a plot of \(\log_{10}(c)\) [\(x\) axis] vs \(q_D(c)\) [\(y\) axis], with \(c\) varying from \(1/10\) to \(10\), say (so \(\log_{10}(c)\) varies from -1 to 1), or a wider range if you like (the wider the range, the larger the simulation study you will need to get reliable results). Use the computations introduced in this vignette to compute the theoretical value for \(q_D(c)\), and plot these on the same graph as your simulation results to demonstrate that your simulations match the theory. [The agreement should be good, provided your simulation is large enough.]

- Repeat the above, but in the case where only 2% of the screened population are diseased.
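One possible shape for the simulation is sketched below in Python. The protein-concentration distributions for the two models are not restated in this document, so the sketch assumes \(X \sim N(0,1)\) for normal and \(X \sim N(1,1)\) for diseased individuals purely for illustration (substitute the distributions from the earlier vignette), and all function names are our own.

```python
import random
from math import exp, log10

def simulate_screen(p_disease, n, seed=1):
    """Simulate n screened individuals; return a list of (x, z) rows.
    Disease status z is 1 with probability p_disease.
    ASSUMPTION (illustrative only): X ~ N(1, 1) if diseased, N(0, 1) if normal.
    """
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        z = 1 if rng.random() < p_disease else 0
        x = rng.gauss(1.0 if z == 1 else 0.0, 1.0)
        rows.append((x, z))
    return rows

def lr_disease(x):
    """LR of diseased vs normal under the assumed N(1,1) vs N(0,1) models.
    The log-densities differ by x - 0.5, so the LR is exp(x - 0.5)."""
    return exp(x - 0.5)

def q_d_hat(rows, c, width=0.1):
    """Empirical proportion diseased among individuals whose log10(LR)
    is within `width` of log10(c)."""
    near = [z for x, z in rows if abs(log10(lr_disease(x)) - log10(c)) < width]
    return sum(near) / len(near) if near else float("nan")

rows = simulate_screen(0.2, 50000)
# Theory from this vignette: q_D(c) = pi1 * c / (pi0 + pi1 * c);
# at c = 1 with pi1 = 0.2 this is 0.2, which q_d_hat should approximate.
```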

`sessionInfo()`

```
R version 3.5.2 (2018-12-20)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Mojave 10.14.1
Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] workflowr_1.2.0 Rcpp_1.0.0 digest_0.6.18 rprojroot_1.3-2
[5] backports_1.1.3 git2r_0.24.0 magrittr_1.5 evaluate_0.12
[9] stringi_1.2.4 fs_1.2.6 whisker_0.3-2 rmarkdown_1.11
[13] tools_3.5.2 stringr_1.3.1 glue_1.3.0 xfun_0.4
[17] yaml_2.2.0 compiler_3.5.2 htmltools_0.3.6 knitr_1.21
```

This site was created with R Markdown