Saturday, February 16, 2019

Why due diligence matters

A couple of months ago, I made an oblique reference to a series of articles published by authors from what appears to be a specific lab. I am still holding off on a more detailed post, although I doubt I will hold off much longer. Presumably, whatever decisions the journal editors will make about the articles that have yet to be corrected are still in the pipeline.

What I do want to note is my role as an educator and scholar when it comes to post-peer review. My goal when presenting contemporary research is to offer something cutting edge that may not yet appear in textbooks. It is really cool to be able to point to experimental research in, say, China, and note that some phenomenon either does or does not replicate across cultures - especially since so much of the research on media violence, and on narrower phenomena like the weapons effect, is based on American and European samples.

However, if in the process of looking over articles I might share with my students, or use as citations in my own work, I notice problems, I can't just remain silent. Since there is always the possibility that I am misreading something, I start with some basics: do the methods as described match what should occur if I or my peers were to run an equivalent study? If yes, then maybe I need to lay off. If no, then it's time for a bit of a deep dive. Thanks to the work of Nuijten et al. (2016) - a public version of their article can be found on OSF - we know that mistakes in the statistical findings reported in Results sections are common. So I like to run articles of potential use through Statcheck (the web-based version can be found here).

If I find problems, then it comes down to how to proceed. I might start by contacting the authors themselves; that is what I initially did with the lab in question, while working on a meta-analysis for which some of their studies were relevant. If the authors are willing to respond, great! Maybe we can look at the data analyses together and attempt to reproduce what they found. Honest mistakes happen, and generally they can and should be fixed. But what if the authors are not interested in communicating? I don't know an easy answer beyond contacting a relevant editor and sharing my concerns.
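For the curious, here is a minimal sketch in Python of the kind of consistency check Statcheck automates. This is not Statcheck itself (which is an R package with more careful handling of rounding and of one- versus two-tailed tests); it just illustrates the core idea of recomputing the p-value implied by a reported test statistic and comparing it to the p-value the article reports. The function name and the example numbers are mine, purely hypothetical.

    # Sketch of the consistency check Statcheck automates: recompute the
    # p-value implied by a reported t statistic and degrees of freedom,
    # then compare it to the reported p after rounding.
    from scipy import stats

    def check_t_report(t_value, df, reported_p, decimals=3):
        """Return True if the reported two-tailed p matches the
        recomputed p at the precision used in the article."""
        recomputed_p = 2 * stats.t.sf(abs(t_value), df)
        return round(recomputed_p, decimals) == round(reported_p, decimals)

    # Hypothetical reported result: "t(28) = 2.05, p = .03"
    print(check_t_report(2.05, 28, 0.03))  # False: recomputed p is about .050

Statcheck does this at scale by extracting every APA-formatted test result from a PDF; the payoff is catching exactly the sort of mismatch shown above.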

Note that the purpose of the exercise is not to hang a fellow researcher (or team of researchers) out to dry. That is a lousy way to live, and I have no interest in living that way. Instead, I want to know that what I am reading, and what my students might read, is fact rather than fiction. Having correct data analyses at hand affects me as a researcher as well. If I am working on a meta-analysis and the analyses and descriptive statistics on which I base my effect size calculations are wrong, my ability to estimate some approximation of the truth will be hampered, and what I then communicate to my peers will be at least somewhat incorrect. I don't want that to happen either. I want to be clear: my intentions are benign. I simply like to know that I can trust what I am reading and the process by which authors arrive at their conclusions. Having been on the other side of the equation, I can say that finding out I had made a serious but correctable error led to a much better article once the dust had settled. Being wrong does not "feel" good in the moment, but that is not the point. The point is to learn and to do better.
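To make the meta-analysis point concrete, here is a toy Python example, with made-up numbers from no real article, showing how a single wrong descriptive statistic distorts a standardized mean difference (Cohen's d computed from group means, standard deviations, and sample sizes):

    # Toy illustration: one mistyped SD roughly halves the effect size.
    import math

    def cohens_d(m1, sd1, n1, m2, sd2, n2):
        """Standardized mean difference using the pooled standard deviation."""
        pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                              / (n1 + n2 - 2))
        return (m1 - m2) / pooled_sd

    # As (hypothetically) reported:
    print(round(cohens_d(5.2, 1.1, 30, 4.6, 1.3, 30), 2))  # d = 0.50
    # Same means, but with a typo inflating one SD from 1.3 to 3.1:
    print(round(cohens_d(5.2, 1.1, 30, 4.6, 3.1, 30), 2))  # d = 0.26

A single transposed digit cuts the estimated effect roughly in half, and an error like that propagates straight into any pooled estimate built on it.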

We researchers are human. We are not perfect, and mistakes come with the territory. Peer review is arguably a poor line of defense for catching them. Thankfully, the post-peer-review tools now at our disposal are better than ever. In the process of evaluating work after publication, we can find what got missed and hopefully make our little corners of science a little better. Ideally we can do so in a way that avoids unnecessary angst and hard feelings. At least that is the intention.

So when I spill some tea in a few months, just be aware that I will lay out a series of facts about what got reported, what went sideways, and how it got resolved (or not resolved). Nothing more, nothing less.

Reference:

Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985-2013). Behavior Research Methods, 48(4), 1205-1226. https://doi.org/10.3758/s13428-015-0664-2
