---
output: github_document
---

```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

# Preregistration as Code

This is a talk about Preregistration as Code (PAC).
Here are [the slides](https://github.com/aaronpeikert/pac-talk/files/7719895/Tools.for.Testing.Theories.pdf).

First, we must understand why preregistration is supposed to work; then we can understand why PAC works better than verbal preregistration.

The probability of observing the evidence given the hypothesis, the so-called "theoretical risk", is central to judging a hypothesis.
Statistical models maintain a certain theoretical risk (through the alpha error, degrees of freedom, etc.), but they provide only a lower bound.
Any action of a researcher that is not statistically well defined must increase it.
If we do not know what a researcher did, we have great uncertainty about the theoretical risk they took.
This uncertainty about the theoretical risk works against us, decreasing the evidence boost a successful test provides.
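
A minimal simulation can make the lower-bound argument concrete; the scenario below is a hypothetical illustration, not taken from the talk or the paper. A single undisclosed choice, reporting whichever of two outcomes happens to be significant, already raises the actual error rate well above the nominal alpha of .05.

```{r, eval = FALSE}
# Hypothetical illustration (not from the talk): one undisclosed researcher
# degree of freedom -- reporting whichever of two outcomes turns out
# "significant" -- inflates the nominal alpha error of .05 under the null.
set.seed(1)
alpha <- 0.05
false_positive <- replicate(10000, {
  group <- factor(rep(c("control", "treatment"), each = 30))
  outcome1 <- rnorm(60)  # both outcomes are unrelated to the group
  outcome2 <- rnorm(60)
  p1 <- t.test(outcome1 ~ group)$p.value
  p2 <- t.test(outcome2 ~ group)$p.value
  min(p1, p2) < alpha    # report whichever test "worked"
})
mean(false_positive)     # roughly .10 instead of the stated .05
```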

Preregistration should remove the uncertainty about the theoretical risk, thereby increasing the evidence boost.
From this viewpoint, PAC is the ideal preregistration. You can find out more about PAC in the second part of the paper ["Reproducible research in R: A tutorial on how to do the same thing more than once"](https://www.mdpi.com/2624-8611/3/4/53).
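
As a rough sketch of the idea (the function name and file path below are hypothetical, not from the talk or the paper): a preregistration as code fixes the complete analysis as executable code before data collection, so that every analytic decision is unambiguous and the very same code is later run, unchanged, on the collected data.

```{r, eval = FALSE}
# Hypothetical sketch of the idea behind PAC; names and paths are made up.
# The complete analysis is committed before data collection, so every
# analytic decision is machine readable and fixed in advance.
preregistered_analysis <- function(data) {
  t.test(outcome ~ group, data = data, alternative = "two.sided")
}

# After data collection, exactly the same code is run on the real data:
collected <- read.csv("data/collected.csv")  # hypothetical path
preregistered_analysis(collected)
```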