Detecting p-hacking

16 Jun 2019  ·  Graham Elliott, Nikolay Kudrin, Kaspar Wüthrich

We theoretically analyze the problem of testing for $p$-hacking based on distributions of $p$-values across multiple studies. We provide general results on when such distributions have testable restrictions (namely, that they are non-increasing) under the null of no $p$-hacking. We find novel additional testable restrictions for $p$-values based on $t$-tests. Specifically, the shape of the power functions implies both complete monotonicity of and bounds on the distribution of $p$-values. These testable restrictions yield more powerful tests of the null hypothesis of no $p$-hacking. When there is also publication bias, our tests are joint tests for $p$-hacking and publication bias. A reanalysis of two prominent datasets demonstrates the usefulness of our new tests.
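As a loose illustration of the first testable restriction (a non-increasing $p$-value density under the null of no $p$-hacking), the sketch below checks that restriction on two adjacent histogram bins with a one-sided binomial test. This is not the paper's own methodology (the authors develop histogram-based tests and tests exploiting complete monotonicity and bounds); the `monotonicity_test` helper, the mixture simulation design, and the bin choices are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import binomtest, norm

def monotonicity_test(pvals, lo, mid, hi):
    """Binomial test of the non-increasing restriction on two adjacent,
    equal-width bins (lo, mid] and (mid, hi].

    Under no p-hacking the p-value density is non-increasing, so
    conditional on landing in either bin, a p-value falls in the upper
    bin with probability at most 1/2. A small p-value from this test
    signals a local hump in the p-curve, e.g. mass piling up just
    below a significance threshold.
    """
    lower = int(np.sum((pvals > lo) & (pvals <= mid)))
    upper = int(np.sum((pvals > mid) & (pvals <= hi)))
    return binomtest(upper, lower + upper, p=0.5, alternative="greater").pvalue

rng = np.random.default_rng(0)

# No p-hacking: two-sided z-test p-values from a 50/50 mixture of true
# nulls (h = 0) and alternatives (h = 2); this density is non-increasing.
h = rng.choice([0.0, 2.0], size=50_000)
p_clean = 2 * norm.sf(np.abs(rng.normal(h, 1.0)))

# Stylized p-hacking (an assumption for this demo): studies with
# 0.05 < p < 0.10 re-specify until just significant, caricatured here
# as reporting a p-value uniform on (0.04, 0.05). This creates a hump
# just below 0.05 that violates non-increasingness.
hacked = (p_clean > 0.05) & (p_clean < 0.10)
p_hacked = np.where(hacked, rng.uniform(0.04, 0.05, size=p_clean.size), p_clean)

print(monotonicity_test(p_clean, 0.03, 0.04, 0.05))   # large: no evidence
print(monotonicity_test(p_hacked, 0.03, 0.04, 0.05))  # tiny: rejects the null
```

Conditioning on the total count in the two bins makes the upper-bin count binomial with success probability at most 1/2 under the null, so the one-sided test is conservative whenever the true density is non-increasing; it only has power against local violations such as the hump simulated above.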
