In many respects, I am a flawed messenger when it comes to understanding what business as usual means in my corner of the scientific universe, and how and why we need to improve how we go about our work when we report data and when we publish anything from research reports to literature review articles and book chapters. I don't pretend to be anything else. I am pretty open about my own baggage because I prefer going to sleep each night with a clean conscience. It is safe to assume that I am highly self-critical. It is also safe to assume that if I am critical of others, it is not out of malice, but rather out of concern about leaving our particular science better than it was when we began.
If you are visiting here, there is a chance that you saw an article that included work I coauthored that was retracted (one article) and required major revisions after a database error (one article for which the corrected version is now thankfully available). I have written about the latter, and will minimize my comments on it. The former does require some explaining. Thankfully, the journalist who covered the story was very fair and largely kept my remarks about both articles intact. Anything that Alison McCook left out was, I suspect, largely because she was not able to independently verify my statements from the sources in question. You can read McCook's article for yourself. I don't have much to add. The main take-away is that using a prior published work as a template for a subsequent manuscript is a very bad idea, and none of us should do so. Cutting corners while drafting a manuscript is tempting when time is short, but the consequences can be disastrous. I agree with McCook that whether out of carelessness (as was the case here) or maliciousness, duplicate content is far from victimless. It wastes others' time and resources, and adds nothing of value to the literature. I have learned my lesson and will move on.
Just to add a bit, let's talk about the process. Brad Bushman was guest editor for an issue of Current Opinion in Psychology, an Elsevier journal. He invited me to coauthor an article on the weapons effect. I agreed, although I was a bit worried about spreading myself too thin. Since we had collaborated well on an earlier review article on weapons as primes of aggressive thoughts and appraisal, I expected that I could count on him to contribute significantly to the manuscript this go-around. That was late 2016. Bushman asked me to wait to draft the manuscript until we had updated analyses from our meta-analysis on the weapons effect (it had been going through the peer review process for quite some time). By the time we had those results, we were up against a deadline. I used the manuscript from our prior work as a template, added a section on aggressive behavioral outcomes when exposed to weapons, and began to redo some of the other sections. I then kicked it over to Bushman. When I got a revision from him, based on a superficial scan of the manuscript, it appeared as if the entire document had been changed. Later I found out that he had made only minor revisions and was largely focused on his dual role as guest editor and coauthor on four other papers for that edition. I have some questions about the wisdom of a guest editor being so highly involved as a coauthor, given the workload involved, but maybe some people can handle that. I still think it is a bad idea. In other words, the collaboration I expected never occurred, and I was too careless to notice.
Now, let's talk about the upload portal and peer review process, as it had numerous problems. McCook could never get anyone from Elsevier to address her questions, so I am going to summarize what I disclosed to McCook. I will note that I can back up every claim I am making, as I still have the original emails documenting what happened safely archived. When the deadline approached, Evise appeared to be down. I don't know if that was system-wide, specific to the journal, or specific to something the guest editor did or failed to do, so I will refrain from any speculation on that front. What I can note is that Bushman as guest editor had to ask all of the primary authors to email our manuscripts to him, and that he would then distribute them to whoever was assigned to review them. The peer review process was extremely rapid. I think we received an email from Barbara Krahe two days after emailing our submission, with suggestions for minor revisions. Krahe was apparently the action editor for the special edition. I do not know if she was the peer reviewer as well. Bushman acted at one point as if she might have been; I cannot confirm that. I made the revisions, and then had to email the revised manuscript to one of the journal's editorial staff members, April Nishimura, in order for her to upload the manuscript manually through Evise. That portal never did work properly. The upshot is that the whole process was backwards and far too rapid. In an ideal situation, the manuscript would have been uploaded to the portal, checked for duplicate content, and then and only then sent out for review. As the Associate Publisher responsible for Current Opinion in Psychology (Kate Wilson) unwittingly disclosed to me, Elsevier's policy is that checking for duplicate content may occur, which is a far cry from a guarantee that each document will be checked.
Had the process worked properly, this manuscript would have been flagged, I would have been taken to task then and there, and whatever corrective action needed to occur would have happened long before we reached an eventual retraction decision. The review process was so minimal that I seriously doubt much time or care was put into inspecting each document. Eventually I saw proofs, and had all of maybe a couple of days to accept them. Basically, I was always uncomfortable with how that process unfolded. Although I ultimately shoulder the lion's share of the burden for what happened, as is the lot of any primary author, I cannot help but wonder if a better system would have led to a better outcome. When alerted early this year that there was a problem with this particular article, I got in contact first with April Nishimura, who eventually connected me with Kate Wilson. I made sure these individuals were aware of what had been disclosed to me and to Bushman about possible duplicate content, offered a corrected manuscript in case that might be acceptable, and then asked them to investigate and make whatever decision they deemed necessary. Given that Bushman was very worried about another retraction on his record, I did pursue alternatives to retraction. After several conversations, including a phone conversation with the Executive Publisher, I felt very comfortable with the retraction decision. It was indeed the right call to make in this instance. Thankfully, only one person ever cited that article, and that was Bushman. It could have been far worse. Given the problems I experienced in early 2017 with the whole process, I had already decided I would be highly reluctant to ever accept any further invitations to publish my work in that particular journal.
The lack of proper screening of manuscripts prior to review, the minimal peer review process, and the general haste with which each edition is put together lead me to question whether this journal adds any value to the science of psychology. I have had far better experiences with journals with much lower impact factors.
As for the meta-analysis, I never saw asking for a retraction as a first option. I am aware that PSPR did consider retraction as one of the options, but it opted to allow us to submit a major revision of the manuscript based on analyses computed from the database once we corrected the flaw that Joe Hilgard detected when he examined the data. As an aside, Hilgard has always been clear that the particular error I made when setting up the database was surprisingly common and that what happened was an honest mistake. There is one circumstance in which I would have insisted on a retraction, and thankfully that never came to pass. In early October 2017, Bushman floated the idea of removing me as the primary author and replacing me with the second author, Sven Kepes. Thankfully, Kepes balked at the idea. After all, as he noted, his expertise was not in the weapons effect, and placing him in the primary author role was not something he was comfortable with. Nor was I comfortable with that idea, as I had conceptualized the project before Bushman even entered the picture, and had done the bulk of the legwork on data entry, coding (including training one of my students to serve as a second coder), the initial data analyses in CMA, and much of the writing. Had Bushman persisted, I would have declared the whole project unsalvageable and expressed my concerns to the editor prior to walking away from what had been a significant amount of work. I question whether that would have been an appropriate course of action, but if placed in an untenable situation, I am not sure what the alternative would have been. Thankfully, it never came to that. Kepes and I both put in considerable time over the next couple of months, ultimately determined that the database was correct, independently cross-validated the computations in the database, and at that point redid our analyses.
I ran a handful of supplemental analyses after Kepes was no longer available, primarily to appease Bushman, but I made it very clear that those need to be interpreted with considerable caution. The updated article is one I am comfortable with, and it essentially suggests that we need to seriously rethink the classic Berkowitz and LePage (1967) article that effectively launched the weapons effect as a line of research. The proper conclusion at this point is that there is not sufficient evidence that the mere presence of a weapon influences aggressive behavioral outcomes. I am not yet ready to write off the possibility that the original findings hold up, but I am well aware that this line of research could well be one of many zombie phenomena in my field. Oddly enough, I am comfortable with that possibility, and am eagerly awaiting large-sample experiments that attempt on some level to replicate the weapons effect as a behavioral phenomenon. If those replications fail, the line of research needs to be viewed as one that simply did not stand the test of time. It happens. I think it is likely that Bushman and I do not see eye to eye on how to interpret the meta-analytic findings. In hindsight, we came into this project with somewhat divergent agendas, and the end result is that the data have converted me from someone who saw this as a plausible phenomenon into someone who is considerably more skeptical. That skepticism is reflected in the revised article, and it will be reflected in any subsequent work I publish on the weapons effect, unless and until subsequent evidence suggests otherwise. I think there is a lesson to be learned from this meta-analysis. For me, the main takeaway is to take concerns about potential errors in one's work seriously, and to cooperate with those who are critical of one's findings. We really need more of that: openness and cooperation in our science.
We also need to make sure that those who do the hard work of double-checking our findings post-publication are rewarded for doing so. No one likes to be the bearer of bad tidings, but if the scientific record needs correcting, it is crucial that those who notice speak out. It is also critical that those who need to make corrections do so proactively and with a healthy dose of humility. We're only human, after all.
One final note: a significant part of my identity is as someone who has some expertise on the weapons effect. After all, two of my earlier publications examined facets of the weapons effect, including some potential limitations of the effect. Realizing that this line of research is not what it appeared to be, to me and perhaps to many others, required an adjustment in my point of view. In some fundamental sense, my identity remains intact, as I still know this literature fairly well. What does change is my perspective as I continue to speak and write about this line of research, as well as my thoughts on the theoretical models used to justify the weapons effect. What I have described is just one of the dark alleys that one might stumble into when making sense of social psychology. The process of doing research is sometimes quite ugly, and sometimes we make very costly errors. There is something to be learned from our mistakes, and I am hopeful that what we learn will lead to better research as we continue our work. Then again, I have always been something of an optimist.