SciCombinator

Abstract
Much has been written regarding p-values below certain thresholds (most notably 0.05) denoting statistical significance and the tendency of such p-values to be more readily publishable in peer-reviewed journals. Intuition suggests that there may be a tendency to manipulate statistical analyses to push a “near significant p-value” to a level that is considered significant. This article presents a method for detecting the presence of such manipulation (herein called “fiddling”) in a distribution of p-values from independent studies. Simulations are used to illustrate the properties of the method. The results suggest that the method has low type I error and that power approaches acceptable levels as the number of p-values being studied approaches 1000.
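The abstract does not spell out the detection method itself, so the sketch below is only a rough illustration of the general idea: screening a collection of p-values from independent studies for a suspicious excess just below the 0.05 threshold. It uses a generic caliper-style binomial comparison, not the authors' procedure, and every function name, parameter value, and the simulated "fiddling" mechanism is a hypothetical assumption.

```python
# Illustrative sketch only: a generic "caliper"-style check for an excess of
# p-values just below 0.05, NOT the detection method proposed in the article.
import numpy as np
from scipy.stats import norm, binomtest

rng = np.random.default_rng(0)


def simulate_p_values(n_studies=1000, effect_fraction=0.3, fiddle_fraction=0.5):
    """Simulate p-values from independent two-sided z-tests.

    A fraction of studies test a real effect; among near-significant results
    (0.05 < p <= 0.07), a fraction are "fiddled" down to just below 0.05.
    All parameter values here are hypothetical choices for illustration.
    """
    has_effect = rng.random(n_studies) < effect_fraction
    z = rng.normal(loc=np.where(has_effect, 1.5, 0.0), scale=1.0)
    p = 2 * norm.sf(np.abs(z))                      # two-sided p-values
    near = (p > 0.05) & (p <= 0.07)                 # "near significant" band
    fiddled = near & (rng.random(n_studies) < fiddle_fraction)
    p[fiddled] = rng.uniform(0.045, 0.0499, fiddled.sum())
    return p


def caliper_test(p, alpha=0.05, width=0.01):
    """Binomial comparison of counts just below vs. just above alpha.

    If the p-value density is roughly smooth across alpha, the two narrow
    bins should hold comparable counts; a marked excess just below alpha
    suggests manipulation. (The smoothness assumption is only approximate
    when true effects make the density slope downward.)
    """
    below = int(np.sum((p >= alpha - width) & (p < alpha)))
    above = int(np.sum((p > alpha) & (p <= alpha + width)))
    result = binomtest(below, below + above, p=0.5, alternative="greater")
    return below, above, result.pvalue


p_values = simulate_p_values()
below, above, detect_p = caliper_test(p_values)
print(f"just below 0.05: {below}, just above 0.05: {above}, "
      f"one-sided binomial p-value: {detect_p:.4f}")
```

With roughly 1000 simulated studies the two narrow bins each contain enough results for the binomial comparison to have some power, which loosely mirrors the abstract's observation that power becomes acceptable near 1000 p-values; with far fewer studies the bin counts are too small to distinguish fiddling from noise.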
Tweets*: 103
Facebook likes*: 2
Reddit*: 2
News coverage*: 0
Blogs*: 2
SC clicks: 65
Concepts: Effect size, Ronald Fisher, Academic publishing, Scientific method, P-value, Statistical hypothesis testing, Statistical significance, Statistics
MeSH headings: -

* Data courtesy of Altmetric.com