License: CC BY (Unrestricted Use)
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
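The meta-analytic test for p-hacking mentioned above looks at the shape of the distribution of significant p-values: if analyses are tweaked until results just cross the significance threshold, reported p-values pile up immediately below 0.05. The following is a minimal sketch of that idea, assuming SciPy is available and using made-up p-values and illustrative bin boundaries rather than the paper's own code or data:

```python
from scipy.stats import binomtest

# Illustrative p-values (invented for this sketch, not from the paper).
p_values = [0.012, 0.021, 0.031, 0.038, 0.043, 0.044, 0.046, 0.047, 0.048, 0.049]

# Count significant p-values in two adjacent bins just below 0.05
# (the bin boundaries here are assumptions for illustration).
upper_bin = sum(0.045 < p < 0.05 for p in p_values)   # nearest the threshold
lower_bin = sum(0.04 < p <= 0.045 for p in p_values)  # slightly further below

# One-sided binomial test: absent p-hacking, a p-value in (0.04, 0.05) should
# be roughly equally likely to fall in either bin; a surplus in the bin
# nearest 0.05 is the signature consistent with p-hacking.
result = binomtest(upper_bin, n=upper_bin + lower_bin, p=0.5, alternative="greater")
print(f"upper bin: {upper_bin}, lower bin: {lower_bin}, binomial p = {result.pvalue:.3f}")
```

The two-bin binomial comparison is only one way to formalize the idea; the bin widths and the one-sided alternative are analysis choices that should be fixed before looking at the data.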
- Subject: Biology, Life Science
- Material Type: Reading
- Provider: PLOS Biology
- Author: Andrew T. Kahn, Luke Holman, Megan L. Head, Michael D. Jennions, Rob Lanfear
- Date Added: 08/07/2020