I've been wanting to share a few thoughts about meta-analysis for a while, and I'm relieved to finally have some spare moments here and there to do so. Since I both consume and produce meta-analyses, I thought my perspective might be useful. I have been involved in two meta-analyses that have been accepted for publication (my most recent, as primary author, is currently in press) and one that is currently in progress. My sense of the positives and negatives of meta-analysis is based very much on that hands-on experience.
I will start off by simply stating the obvious: our field has a serious publication bias problem. The causes of that problem are multiple and need not be litigated here. The bottom line, though, is that the sample of studies in a meta-analysis is just that - a sample. We do not have access to the population of studies testing a particular hypothesis, and hence any effect size estimate we calculate should not be taken at face value. Some means of estimating publication bias effects (these techniques fall under the broad umbrella of sensitivity analyses) is therefore necessary to give us a better understanding of the literature we are studying.

The problem is that right now we do not seem to agree on which techniques for estimating publication bias should be utilized. Well, I do think we agree that the fail-safe N can be safely dispensed with. Beyond that, disagreements abound. For much of this century, trim-and-fill analyses and funnel plots have been the gold standard. That set of techniques, however, has come under attack for underestimating the effects of publication bias, leading meta-analysts to be a bit too sanguine in their conclusions. A variety of alternatives have been offered, many of which have been in use for only a handful of years (PET-PEESE and p-curve come to mind), and meta-analyses utilizing these techniques can lead to different conclusions than those using trim-and-fill.

The two meta-analyses recently published on ego depletion, based on the same sample of studies, come immediately to mind. Depending on whether one reads the meta-analysis whose authors rely on trim-and-fill or the one whose authors rely on PET-PEESE, one may draw very different conclusions about the state of ego depletion research: the former suggests that ego depletion research is alive and well, whereas the latter suggests it has been thoroughly debunked. Although the authors of those meta-analyses appear to me to be working in good faith, it is not hard to imagine meta-analysts with specific axes to grind choosing their favored sensitivity analysis techniques based on their severity - that is, based on whether the techniques in question will "support" or "debunk" a particular body of research. That worries me a great deal.

For now, I recommend utilizing a battery of techniques to estimate publication bias (including trim-and-fill and PET-PEESE) and examining the extent to which those techniques triangulate around a "true" effect size for a particular distribution of effects. My reasoning is largely based on the ideal of maintaining the sort of objectivity that meta-analysis promised when it was introduced as an alternative to the narrative literature review. Anything short of that would place us back where we were when narrative reviews were the only way of assessing a literature: highly subjective, and based upon the whims of the reviewer or reviewers. That is not a road we want to travel.
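To make the PET-PEESE part of that battery a little more concrete, here is a minimal sketch in Python. Everything in it is illustrative: the effect sizes and standard errors are made up, statsmodels is assumed to be available, and the decision rule shown (switch from PET to PEESE when the PET intercept is significantly positive, one-tailed) is just one common convention, not the only defensible one. In practice most meta-analysts run these models in R rather than Python.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: standardized effect sizes (d) and their
# standard errors. In a real meta-analysis these come from the coded studies.
d = np.array([0.62, 0.48, 0.15, 0.35, 0.71, 0.09, 0.44, 0.28])
se = np.array([0.30, 0.25, 0.10, 0.18, 0.35, 0.08, 0.22, 0.14])
w = 1.0 / se**2  # inverse-variance weights

# PET: regress effects on standard errors; the intercept estimates what a
# hypothetical study with se = 0 (i.e., no small-study bias) would find.
pet = sm.WLS(d, sm.add_constant(se), weights=w).fit()

# One common decision rule: if the PET intercept is significantly greater
# than zero (one-tailed), re-estimate with PEESE, which regresses effects
# on the sampling variances instead of the standard errors.
pet_p_one_tailed = pet.pvalues[0] / 2 if pet.params[0] > 0 else 1 - pet.pvalues[0] / 2
if pet_p_one_tailed < 0.05:
    peese = sm.WLS(d, sm.add_constant(se**2), weights=w).fit()
    print(f"PEESE bias-corrected estimate: d = {peese.params[0]:.3f}")
else:
    print(f"PET bias-corrected estimate: d = {pet.params[0]:.3f}")
```

The battery approach then amounts to running trim-and-fill (e.g., the trimfill() function in R's metafor package), p-curve, and a PET-PEESE model like the one above on the same set of effects and asking whether the corrected estimates converge. When they do not, that divergence is itself worth reporting.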