
IMPS


  • Columbia University, New York

Jelte Wicherts, Robbie van Aert, and Esther Maassen will present at the IMPS conference in New York.

Novel approaches to dealing with heterogeneity in meta-analyses

Symposium submission IMPS 2018 New York

Organized by J.M. Wicherts, Tilburg University, j.m.wicherts@uvt.nl

Meta-analyses are increasingly being used to collate evidence from research lines in psychology and other fields. The goals of these meta-analyses are to estimate mean effects, heterogeneity, and potential moderation of effects by study-level characteristics. This symposium consists of four talks that bear on the key issue of heterogeneity in meta-analyses. Each talk approaches heterogeneity in meta-analysis from a different perspective, ranging from methodological issues (coding errors) to statistical approaches to modelling publication bias, structural equation models, and analyses of multiple moderators. Specifically, the talks deal with heterogeneity that is possibly due to coding errors and outliers (Maassen), consider heterogeneity as implemented in the novel tool p-uniform (van Aert), deal with heterogeneity when meta-analyzing structural equation models (meta-analytic SEM; Jak), and explain heterogeneity with multiple moderators in meta-analysis (Li, Dusseldorp, & Meulman). In the first talk, Esther Maassen presents a study in which she selected a random sample of meta-analyses from the psychological literature and re-computed the effect sizes to assess the reproducibility of results, to see how meta-analysts deal with heterogeneity, and to determine whether coding errors and outliers affect heterogeneity estimates. In the second talk, Robbie van Aert will present a new random-effects version of the p-uniform method, developed to correct for publication bias under heterogeneity. In the third talk, Suzanne Jak will present a new maximum-likelihood-based method to deal with heterogeneity in meta-analytic structural equation models. In the fourth talk, Xinru Li and her colleagues will present metaCART, a new, flexible R package for meta-analysis that deals with multiple moderators and potential interactions between these moderators. We will end with a general discussion opened by Wicherts.

Correcting for publication bias in a meta-analysis with p-uniform*

Robbie van Aert – Tilburg University, R.C.M.vanAert@uvt.nl

Meta-analysis is now seen as the “gold standard” for synthesizing evidence from multiple studies. However, a major threat to the validity of a meta-analysis is publication bias, which refers to situations in which the published literature is not a representative reflection of the population of completed studies. In its most extreme form, this implies that studies with statistically significant results get published and studies with statistically nonsignificant results do not. A consequence of publication bias is that the meta-analytic effect size is overestimated. The p-uniform method corrects meta-analytic estimates for publication bias, but it overestimates the average effect size in the presence of heterogeneity in the primary studies’ true effect sizes (i.e., between-study variance). We propose an extension and improvement of the p-uniform method called p-uniform*. This new method improves on p-uniform in three important ways: it (i) is a more efficient estimator, (ii) eliminates the overestimation of effect size in the case of between-study variance in true effect sizes, and (iii) enables estimating and testing for the presence of between-study variance in true effect sizes. We will explain the p-uniform* method and discuss the results of an analytical study and a Monte Carlo simulation study in which p-uniform* was compared with a selection model approach to correcting for publication bias. We offer recommendations for correcting meta-analyses for publication bias in practice, as well as an R package and an easy-to-use web application for applying p-uniform*.
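
The abstract mentions an R package for p-uniform* but does not describe its interface. As a minimal, hedged sketch only, a call could look like the following, assuming the CRAN package puniform and its puni_star() function, with simulated effect sizes and sampling variances standing in for real meta-analytic data.

# Minimal sketch (assumptions: the 'puniform' package and its puni_star()
# function; the data below are simulated purely for illustration).
# install.packages("puniform")
library(puniform)

set.seed(1)
k  <- 20                                 # number of primary studies
yi <- rnorm(k, mean = 0.3, sd = 0.2)     # observed effect sizes (e.g., SMDs)
vi <- runif(k, min = 0.01, max = 0.05)   # sampling variances (illustrative)

# Estimate the average true effect and the between-study variance while
# correcting for publication bias; 'side' is the expected direction of effects.
res <- puni_star(yi = yi, vi = vi, side = "right")
res

On such output one would look for the bias-corrected estimate of the average effect together with the estimate and test of the between-study variance, which corresponds to the three improvements listed in the abstract.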

Reproducibility of psychological meta-analyses: coding errors, outliers, and heterogeneity

Esther Maassen - Tilburg University, e.maassen@tilburguniversity.edu

Various studies have assessed the prevalence of reporting errors and inaccurate computations in psychological articles. These problems with errors in primary studies extend to the level of meta-analyses in the biomedical and health literature. The goal of this study was to systematically assess the reproducibility of psychological meta-analyses. To this end, we randomly selected meta-analyses from the psychological literature and included 33 meta-analyses that reported effect sizes at the study level. We subsequently determined the prevalence of reporting and computational errors in 500 of the primary study effect sizes, and checked whether correcting these effect sizes altered the overall meta-analytic effect sizes, confidence intervals, and heterogeneity estimates. We documented how often we were unable to reproduce primary study effect sizes and the main meta-analytic outcomes. Additionally, we documented how meta-analysts dealt with issues related to heterogeneity, outlying primary studies, signs of publication bias, and possibly dependent effect sizes. Common issues in the reproducibility of meta-analyses pertain to the omission of information necessary to reproduce effect size computations and meta-analytic approaches. We present error rates and highlight the importance of using meta-analytic reporting standards and other practices that might help improve the reproducibility of meta-analyses.
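
The abstract does not name the software used for these reproducibility checks. As a minimal sketch only, such a comparison of reported versus recomputed effect sizes could be set up in R with the metafor package (an assumption, not stated in the abstract): re-fit the meta-analysis with both sets of effect sizes and compare the pooled estimate, its confidence interval, and the heterogeneity estimates.

# Minimal sketch (assumption: metafor; the numbers below are illustrative only).
# Compare a meta-analysis based on the effect sizes as reported in the original
# meta-analysis with one based on effect sizes recomputed from primary studies.
library(metafor)

dat <- data.frame(
  m1i = c(5.1, 4.8, 5.5), sd1i = c(1.2, 1.1, 1.3), n1i = c(40, 55, 32),
  m2i = c(4.6, 4.7, 4.9), sd2i = c(1.1, 1.2, 1.4), n2i = c(38, 60, 35),
  yi_reported = c(0.45, 0.02, 0.40)  # effect sizes as coded by the meta-analysts
)

# Recompute standardized mean differences (adds columns yi and vi)
dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)

fit_reported   <- rma(yi = yi_reported, vi = vi, data = dat, method = "REML")
fit_recomputed <- rma(yi = yi,          vi = vi, data = dat, method = "REML")

# Compare pooled effects, confidence intervals, and heterogeneity (tau^2, I^2)
summary(fit_reported)
summary(fit_recomputed)

Comparing the two summaries shows whether corrected effect sizes change the pooled effect, its confidence interval, or the heterogeneity statistics relative to what the original meta-analysis reported.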