Journal Article
Research Support, Non-U.S. Gov't
Review

Publication bias in ecology and evolution: an empirical assessment using the 'trim and fill' method.

Recent reviews of specific topics, such as the relationship between male attractiveness to females and fluctuating asymmetry, or between attractiveness and the expression of secondary sexual characters, suggest that publication bias might be a problem in ecology and evolution. In these cases, there is a significant negative correlation between the sample size of published studies and the magnitude or strength of the research findings (formally, the 'effect size'). If all studies that are conducted are equally likely to be published, irrespective of their findings, there should be no directional relationship between effect size and sample size; only a decrease in the variance in effect size as sample size increases, due to a reduction in sampling error. One interpretation of these negative correlations is that studies with small sample sizes and weaker findings (smaller effect sizes) are less likely to be published. If the biological literature is systematically biased in this way, it could undermine the attempts of reviewers to summarise actual biological relationships by inflating estimates of average effect sizes. But how common is this problem? And does it really affect the general conclusions of literature reviews? Here, we examine data sets of effect sizes extracted from 40 peer-reviewed, published meta-analyses. We estimate how many studies are missing using the newly developed 'trim and fill' method. This method uses asymmetry in plots of effect size against sample size ('funnel plots') to detect 'missing' studies. For random-effects models of meta-analysis, 38% (15/40) of data sets had a significant number of 'missing' studies. After correcting for potential publication bias, 21% (8/38) of weighted mean effects were no longer significantly greater than zero, and 15% (5/34) were no longer statistically robust when we used random-effects models in a weighted meta-analysis.
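The mechanics of 'trim and fill' can be sketched in miniature. Below is a simplified, unweighted implementation using Duval and Tweedie's rank-based R0 estimator: iteratively trim the most extreme effects, re-estimate the centre, count the asymmetric run of right-side points, then 'fill' with their mirror images. The paper's actual analyses weight studies by inverse sampling variance (as does, e.g., `metafor::trimfill` in R), so treat this as an illustration of the idea, not the published procedure; the example data are invented.

```python
import numpy as np

def trim_and_fill_r0(effects, max_iter=50):
    """Unweighted sketch of 'trim and fill' with the rank-based R0 estimator.

    Returns (k0, naive_mean, adjusted_mean): the estimated number of
    'missing' left-side studies, the mean effect before filling, and the
    mean after imputing k0 mirror-image studies.
    """
    y = np.sort(np.asarray(effects, dtype=float))
    n = len(y)
    k0 = 0
    for _ in range(max_iter):
        centre = y[: n - k0].mean()      # centre with the k0 rightmost trimmed
        dev = y - centre
        order = np.argsort(np.abs(dev))  # indices, smallest to largest |deviation|
        run = 0                          # rightmost run of positive deviations
        for i in order[::-1]:
            if dev[i] > 0:
                run += 1
            else:
                break
        new_k0 = max(run - 1, 0)         # R0 estimator
        if new_k0 == k0:
            break
        k0 = new_k0
    # 'Fill': reflect the k0 most extreme effects about the trimmed centre
    filled = np.concatenate([y, 2.0 * centre - y[n - k0:]]) if k0 else y
    return k0, y.mean(), filled.mean()

# Hypothetical funnel with a censored left tail: filling pulls the mean down
effects = [0.9, 1.0, 1.05, 1.1, 1.15, 1.2, 1.9, 2.4, 3.0]
k0, naive, adjusted = trim_and_fill_r0(effects)
print(k0, round(naive, 3), round(adjusted, 3))  # → 2 1.522 1.186
```

This mirrors the pattern reported above: once the estimated 'missing' studies are imputed, the mean effect shrinks, and in some of the 40 data sets it was no longer significantly greater than zero.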
The mean correlation between sample size and the magnitude of the standardised effect size was also significantly negative (r(s) = -0.20, P < 0.0001). Individual correlations were significantly negative (P < 0.10) in 35% (14/40) of cases. Publication bias may therefore affect the main conclusions of at least 15-21% of meta-analyses. We suggest that future literature reviews assess the robustness of their main conclusions by correcting for potential publication bias using the 'trim and fill' method.
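The sample-size/effect-size correlation quoted above is a Spearman rank correlation. A minimal NumPy sketch (Spearman's rho is just the Pearson correlation of the ranks; the study sizes and effect sizes below are invented for illustration, not the paper's data):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    (No tie correction; fine for the tie-free illustrative data below.)"""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical pattern: larger studies reporting weaker effects
n_i = np.array([12, 25, 50, 100, 200, 400])   # study sample sizes
d_i = np.array([0.82, 0.65, 0.48, 0.30, 0.21, 0.10])  # effect sizes
print(spearman_rho(n_i, d_i))  # perfectly monotone decreasing → -1.0
```

Under no publication bias the expected rank correlation is zero; a consistently negative rho across meta-analyses, as reported here, is the funnel-plot asymmetry signature.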


