Media Attention for `statcheck`

Lately, statcheck has received quite some media attention. In a piece in Nature, Monya Baker has written a thorough and nuanced overview of statcheck and of Chris Hartgerink's PubPeer project, in which he scanned 50,000 papers and posted the statcheck results on the online forum PubPeer. A Nature editorial also discusses this type of post-publication peer review. Some other interesting coverage of statcheck is listed below; a minimal usage sketch of statcheck itself follows the list.

  • Buranyi, S. (2016). Scientists are worried about ‘peer review by algorithm’. Motherboard (VICE).
  • Resnick, B. (2016). A bot crawled thousands of studies looking for simple math errors. The results are concerning. Vox.
  • Kershner, K. (2016). Statcheck: When bots ‘correct’ academics. How Stuff Works.
  • Keulemans, M. (2016). Worden sociale wetenschappen geterroriseerd door jonge onderzoekers? Oorlog onder psychologen [Are the social sciences being terrorized by young researchers? War among psychologists]. De Volkskrant.
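
For readers who want to see what such an automated check looks like in practice, here is a minimal sketch using the statcheck R package. The sentence being checked is made up for illustration; statcheck extracts APA-style null-hypothesis test results from text, recomputes the p-value from the reported test statistic and degrees of freedom, and flags inconsistencies between the recomputed and the reported p-value.

```r
# Minimal sketch: running statcheck on a snippet of text.
# The example sentence is invented; statcheck extracts the reported test,
# recomputes the p-value, and compares it with the reported one.
# install.packages("statcheck")
library(statcheck)

txt <- "The effect was significant, t(28) = 2.20, p = .03."
statcheck(txt)

# Scanning a whole folder of articles works along the same lines,
# e.g. checkPDFdir("~/papers"); the PubPeer scan did this for ~50,000 papers.
```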

New Paper on Researchers' Intuitions About Statistical Power

Our team member Marjan Bakker, together with Chris Hartgerink, Jelte Wicherts, and Han van der Maas, has just published a paper in Psychological Science. The abstract:

Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers’ experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies.

The paper is available here (Open Access).
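
To illustrate the kind of formal power analysis the paper recommends, here is a minimal sketch in R using the built-in power.t.test() function. Treating a "small effect" as Cohen's d = 0.2 is an assumption made for this example; the numbers are not taken from the paper.

```r
# A priori power analysis for a two-sample t-test with base R.
# Assumption for this example: "small effect" = Cohen's d of 0.2 (delta = 0.2, sd = 1).

# Per-group sample size needed to reach 80% power at alpha = .05
power.t.test(delta = 0.2, sd = 1, sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")
# ... gives roughly 394 participants per group.

# And the other way around: the power achieved with a hypothetical
# "typical" cell size of 25 participants per group
power.t.test(n = 25, delta = 0.2, sd = 1, sig.level = 0.05)
```

A one-line calculation like this makes the assumed effect size explicit instead of relying on intuition or common practice.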

APS 2016 Chicago Presentations

The Meta-Research group was well represented at the APS conference in Chicago. As a recap, we have shared all our slides below. Feel free to view them and let us know if you have any questions or suggestions! Where applicable, Open Science Framework links are included, which both preserves the presentations and makes them citable.

  • The Psychology of Statistics and the Statistics of Psychology
  • Honesty and Trust in Psychology Research
  • How to Deal with Publication Bias in Psychology? Illustrations and Recommendations
  • Paulette Flore's presentation: to be added

Preprint of a New Paper Comparing p-curve and p-uniform

Our team member Robbie van Aert, together with Jelte Wicherts and Marcel van Assen, recently got his paper accepted for publication in Perspectives on Psychological Science. The abstract:

Because evidence of publication bias in psychology is overwhelming, it is important to develop techniques that correct meta-analytic estimates for publication bias. Van Assen, Van Aert, and Wicherts (2015) and Simonsohn, Nelson, and Simmons (2014a) developed p-uniform and p-curve, respectively. The methodology on which these methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, we show that in some situations p-curve behaves erratically while p-uniform may yield implausible negative effect size estimates. Moreover, we show that (and explain why) p-curve and p-uniform overestimate effect size under moderate to large heterogeneity, and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analysis in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform (https://rvanaert.shinyapps.io/p-uniform).

An interesting read for anyone using or interested in applying these methods! The paper will be published in a special issue on Methods and Practices.

Download the preprint here.
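
For readers curious about the mechanics, the following is a rough conceptual sketch of the idea behind p-uniform, not the authors' implementation: conditional on being statistically significant, p-values are uniformly distributed when computed under the true effect size, so the method searches for the effect size at which the conditional p-values of the significant studies behave like a uniform sample. The t-values and sample sizes below are invented, and the estimating equation used here (setting the sum of the -log conditional p-values equal to its expectation) is only one of several possible variants; for actual analyses, use the authors' R code or the web application linked above.

```r
# Rough conceptual sketch of p-uniform (not the published implementation).
# Idea: conditional on significance, p-values are uniform under the true
# effect size; we search for the delta at which that holds.

tvals       <- c(2.30, 2.65, 2.10)   # made-up t-values of significant studies
n_per_group <- c(40, 55, 36)         # made-up per-group sample sizes
dfs         <- 2 * n_per_group - 2
alpha       <- 0.05

# Probability of a result at least as extreme as t, given significance,
# computed under candidate effect size delta (two-sample t-test, equal n)
cond_p <- function(delta, t, df, n) {
  ncp   <- delta * sqrt(n / 2)                 # noncentrality parameter
  tcrit <- qt(1 - alpha / 2, df)               # two-sided critical value
  pt(t, df, ncp = ncp, lower.tail = FALSE) /
    pt(tcrit, df, ncp = ncp, lower.tail = FALSE)
}

# Under the true delta, -log of these conditional p-values are Exp(1),
# so their sum should be close to the number of studies.
loss <- function(delta) {
  q <- mapply(cond_p, t = tvals, df = dfs, n = n_per_group,
              MoreArgs = list(delta = delta))
  sum(-log(q)) - length(tvals)
}

# The effect size at which the conditional p-values look uniform
uniroot(loss, interval = c(-1, 2))$root
```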