The blog of Dr. Arlin James Benjamin, Jr., Social Psychologist
Friday, February 23, 2018
A couple of weeks ago, I mentioned Erin Bartram's blog post, which explained the structural problems facing so many scholars and which led her to conclude it was time to exit the field. Her blog post struck a nerve. You can now read an interview with her in The Chronicle of Higher Education, and I recommend doing so. The current state of the academic environment in the US is not sustainable, and we are losing a great deal of talent in the process.
Thursday, February 22, 2018
A cautionary remark about meta-analyses
I've been wanting to share a few thoughts about meta-analysis for a while, and I'm relieved to finally have a few spare moments in which to do so. I hope these thoughts prove useful, as I both consume and produce meta-analyses. I have been involved in two meta-analyses that have been accepted for publication (the most recent, on which I am the primary author, is currently in press) and one that is currently in progress, so my sense of the strengths and weaknesses of meta-analysis is based very much on hands-on experience.
I will start off by simply stating the obvious: our field has a serious publication bias problem. The causes of that problem are multiple and need not be litigated here. The bottom line remains, though, that the set of studies in a meta-analysis is just that - a sample. We do not have access to the population of studies testing a particular hypothesis, and hence any effect size estimate we calculate should not be taken at face value. Some means of estimating publication bias effects (these techniques fall under the broad umbrella of sensitivity analyses) is therefore necessary to give us a better understanding of the literature we are studying.
The problem is that right now we do not seem to be in agreement over which techniques for estimating publication bias should be used. We do seem to agree that the fail-safe N can be safely dispensed with; beyond that, disagreements abound. For much of this century, trim-and-fill analyses and funnel plots have been the gold standard. However, that set of techniques has come under attack for underestimating the effects of publication bias, leading meta-analysts to be a bit too sanguine in their conclusions. A variety of alternatives have been offered - many of which have only been in use for a handful of years (PET-PEESE and p-curve come to mind) - and meta-analyses utilizing these techniques can lead to different conclusions than those using trim-and-fill.
The two meta-analyses recently published on ego depletion, based on the same sample of studies, come immediately to mind. Depending on whether one reads the meta-analysis in which the authors rely on trim-and-fill or the one in which the authors rely on PET-PEESE, one may draw very different conclusions about the state of ego depletion research: the former suggests that ego depletion research is alive and well, whereas the latter suggests it has been thoroughly debunked. Although the authors of those meta-analyses appear to me to be working in good faith, it would not take much to imagine meta-analysts with axes to grind choosing their favored sensitivity analyses based on their severity - that is, based on whether the technique in question will "support" or "debunk" a particular body of research. That worries me a great deal.
For now I recommend utilizing a battery of techniques to estimate publication bias (including trim-and-fill and PET-PEESE) and examining the extent to which those techniques triangulate around a "true" effect size for a particular distribution. My reasoning is largely based on the ideal of maintaining the sort of objectivity that meta-analysis promised when it was introduced as an alternative to the narrative literature review. Anything short of that would place us back where we were when narrative reviews were the only way of assessing a literature: highly subjective and at the whims of the reviewer or reviewers. That is not a road we want to travel.
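For readers curious about what PET-PEESE actually involves, here is a minimal sketch in Python. It is a toy illustration under my own assumptions - a vector d of study effect sizes and se of their standard errors; the function name, the alpha cutoff for the conditional step, and the fabricated data are all mine - not a substitute for a validated meta-analytic package.

```python
import numpy as np
import statsmodels.api as sm

def pet_peese(d, se, alpha=0.10):
    """Conditional PET-PEESE: a rough sketch, not a validated implementation."""
    w = 1.0 / se**2                                  # inverse-variance weights
    # PET: weighted regression of effect size on standard error.
    # The intercept estimates the effect for an ideal study with SE = 0.
    pet = sm.WLS(d, sm.add_constant(se), weights=w).fit()
    if pet.pvalues[0] < alpha:
        # The PET intercept suggests a genuine non-zero effect, so switch to
        # PEESE, which regresses on the sampling variance (SE squared).
        peese = sm.WLS(d, sm.add_constant(se**2), weights=w).fit()
        return peese.params[0], "PEESE"
    return pet.params[0], "PET"

# Fabricated numbers, purely for illustration:
rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.40, size=30)
d = 0.20 + rng.normal(0.0, se)                       # true effect of 0.20
print(pet_peese(d, se))
```

The point of the sketch is simply that the bias-corrected estimate is an extrapolation to a hypothetical infinitely precise study, which is one reason different correction techniques can disagree so sharply.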
Sunday, February 11, 2018
Worth your consideration
The Sublimated Grief of the Left Behind
I follow quite a number of scholars on Twitter, and periodically I see retweets of posts that fall under the broad umbrella of "quit lit." This post is a bit different, and I hope its author's perspective offers some much-needed food for thought. As someone who has experienced the loss of talented colleagues to the circumstances the author describes, I found this post hit close enough to home to bear mentioning.
Friday, February 2, 2018
Never treat a meta-analysis as the last word
I mentioned earlier that no individual meta-analysis should be treated as the last word. Rather, it is best to treat a meta-analytic study as a tentative assessment of the state of a particular research literature at a particular moment. One obvious reason for my stance comes down to the sample of studies testing a particular hypothesis that is available at any given time. Presumably, over time, more studies attempting to replicate the hypothesis test in question will be conducted and, ideally, reported. In addition, search engines are much better at detecting unpublished studies (what one of my mentors referred to as the "fugitive literature") than they once were - partially due to technological advances and partially because individuals are making their unpublished work (especially null findings) publicly available to a greater degree. To the extent that is the case, we would want to see periodic updated meta-analyses that account for these newer studies.
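To make the "tentative assessment" point concrete, here is a minimal sketch of a cumulative random-effects meta-analysis using the DerSimonian-Laird estimator, recomputed as each new study arrives. The studies and their numbers are fabricated for illustration, and the variable names are mine.

```python
import numpy as np

def dl_random_effects(d, v):
    """DerSimonian-Laird pooled effect from effect sizes d and variances v."""
    w = 1.0 / v
    d_fixed = np.sum(w * d) / np.sum(w)           # fixed-effect pooled estimate
    q = np.sum(w * (d - d_fixed) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    return np.sum(w_star * d) / np.sum(w_star)

# Hypothetical studies ordered by year; note how the pooled estimate
# shrinks as later (often better-powered) studies accumulate.
d = np.array([0.62, 0.55, 0.48, 0.30, 0.12, 0.08])
v = np.array([0.09, 0.08, 0.06, 0.04, 0.02, 0.02])
for k in range(2, len(d) + 1):
    print(f"first {k} studies: pooled d = {dl_random_effects(d[:k], v[:k]):.3f}")
```

A synthesis frozen at the third study would tell a very different story than one run after the sixth, which is precisely why periodic updating matters.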
The second obvious reason is that meta-analysis itself is evolving. The techniques for synthesizing studies addressing a particular hypothesis are much more sophisticated than they were when I began my graduate studies, and they are bound to continue improving. The techniques for estimating mean effect sizes are more sophisticated, as are the techniques for estimating the impact of publication bias and of outliers. If anything, recent meta-analyses are alerting us to what should have been obvious long ago: we have a real file drawer problem, and the failure to publish null findings, or findings no longer considered "interesting," gives us a more rose-colored view of our various research literatures than is warranted. That said, since we cannot quite agree among ourselves on which publication bias analyses are adequate, and since these techniques can yield divergent estimates of publication bias, it is best for the time being to use a battery of such techniques, as sketched below.
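As one concrete example of an item in such a battery, here is a sketch of Egger's regression test for funnel plot asymmetry, under the same hypothetical inputs (d for effect sizes, se for standard errors) as the earlier sketch. In practice one would run this alongside trim-and-fill and PET-PEESE and compare the resulting estimates.

```python
import numpy as np
import statsmodels.api as sm

def egger_test(d, se):
    """Egger's regression test: regress the standardized effect (d / se)
    on precision (1 / se). An intercept far from zero is a symptom of
    funnel plot asymmetry, one signature of publication bias."""
    y = d / se
    X = sm.add_constant(1.0 / se)
    fit = sm.OLS(y, X).fit()
    return fit.params[0], fit.pvalues[0]          # intercept and its p-value

# Fabricated example data, purely for illustration:
rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.40, size=30)
d = 0.20 + rng.normal(0.0, se)
intercept, p = egger_test(d, se)
print(f"Egger intercept = {intercept:.2f} (p = {p:.3f})")
```

No single test in the battery is decisive; the question is whether the techniques triangulate on roughly the same corrected estimate.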
Finally, there is my nagging concern that once a meta-analysis is published and treated as the last word, future research on that particular question can effectively cease. Yes, some isolated investigators will continue to conduct research, but with much less hope of their work being given its due than it might otherwise have had. I suspect that we could look at research areas where a meta-analysis has indeed become the proverbial "last word" and find evidence that that is exactly what transpired. Given reasons one and two above, that would be concerning, to say the least. There is at least one research literature with which I am intimately familiar where I suspect one very important facet of the literature effectively halted after what became a classic meta-analysis was published. I will turn to that literature at some point in the near future.