Friday, January 29, 2021

Research Confidential: How Self-Correcting is Science?

The question is arguably rhetorical. Science in and of itself is not self-correcting. It takes living, breathing human beings to notice that something is wrong, to take the time and effort to report it to the relevant stakeholders (e.g., journal editors and university administrations), and then to have good reason to believe that those stakeholders will show due diligence, correct or retract flawed papers as needed, and otherwise hold those responsible for the flaws, whether due to sheer incompetence or fraud, accountable. In an ideal world, that is how it would work. In this world, it's considerably more complicated, and often more than a bit disheartening.

If you are a regular reader of this blog, you are well aware of our favorite media violence researcher, Qian Zhang of Southwest University, who is notorious for some of the worst papers published in that particular niche of psychology. Over the last couple of years I have documented tables of means, standard deviations, and test statistics that are simply impossible to interpret. I have reported test statistics that, given the degrees of freedom reported, would have to be incorrect. I have reported discrepancies between the degrees of freedom for test statistics and the reported sample sizes. I have documented evidence of potential plagiarism and self-plagiarism, the latter due to Zhang's tendency to rely heavily on copying and pasting from one paper to another. I have also found some amusing typos that resulted from that same habit of copying and pasting tables from paper to paper. I've tagged Zhang's work as I have documented it here (for your convenience) and on PubPeer under a pseudonym.

Dr. Joe Hilgard has gone considerably further than I have. He has blogged in great detail about his own experiences documenting problems with Zhang's work, and about his efforts to contact journal editors and officials at Zhang's university, offering painstaking evidence of the problems he has discovered. You can read about Hilgard's efforts, and the decidedly mixed and disappointing outcome, here. You should really take the time to read his post, as it is thorough and damning. The short version? Some journal editors responded rather well, and in one case moved very quickly to retract two papers that were clearly unsound. Other journal editors have either stonewalled or ignored Hilgard's concerns. Zhang's university cleared him of wrongdoing, chalking it all up to Zhang being "deficient in statistical knowledge and research methods." In other words, the university writes it off as "the guy's merely an idiot, but hey, let's just give him a remedial stats course and call it even." I agree with Hilgard that the university's failure to take action is not that surprising, as universities seem to be in the business of taking care of their own, especially if the researcher in question might be bringing in grants or other forms of prestige. So the guy maybe fudges some numbers and has no idea what random assignment means. There's nothing to see here. Move along.

My take on the matter is that the most charitable reading of Qian Zhang's body of work is that this is a researcher who is grossly incompetent, but a more defensible case can be made that his activities are on some level fraudulent. I am inclined toward the latter, less charitable view. I've seen too much. Regardless, this is research that should never have made it past peer review. I agree with Hilgard that this body of work is deeply problematic: as long as it remains published, it will distort our understanding of how stimuli such as violent video games affect outcomes such as aggressive behavior and cognition. Meta-analyses are especially vulnerable, because some of Zhang's reported findings rely on large samples. Under the inverse-variance weighting that meta-analyses typically use, those large-sample results can dominate the pooled estimate and artificially inflate it, leading meta-analysts and those consuming meta-analyses to believe that an overall effect is stronger than it actually is, as the sketch below illustrates.
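To make that weighting concern concrete, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis in Python. The effect sizes and sample sizes are invented purely for illustration and are not taken from any actual study by Zhang or anyone else; the point is only that a single large-N study reporting an implausibly large effect can drag the pooled estimate well away from what the smaller studies suggest.

```python
# Hypothetical studies: (label, Cohen's d, total N).
# All numbers are made up for illustration only.
studies = [
    ("Small study A", 0.10, 80),
    ("Small study B", 0.05, 120),
    ("Small study C", 0.12, 100),
    ("Large suspect study", 0.45, 3000),  # big N, implausibly large effect
]

def variance_of_d(d, n):
    """Approximate sampling variance of Cohen's d, assuming equal group sizes."""
    n1 = n2 = n / 2
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def pooled_effect(studies):
    """Fixed-effect, inverse-variance weighted mean effect size."""
    weights = [1 / variance_of_d(d, n) for _, d, n in studies]
    effects = [d for _, d, _ in studies]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

print(f"Pooled d with the large study:    {pooled_effect(studies):.3f}")
print(f"Pooled d without the large study: {pooled_effect(studies[:-1]):.3f}")
```

With these made-up numbers, the pooled effect comes out around 0.4 with the large study included and under 0.1 without it. That is exactly the kind of distortion to worry about when untrustworthy large-sample results sit alongside honest small ones.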

This is one of the dark alleys I mentioned a few years ago. And given what Hilgard has experienced, and what I've experienced in my own way, it's one that few leave with any sense of hope for the state of this particular area of psychological inquiry. If blatantly problematic papers, ones with problems so obvious that a beginning methods student could spot them, cannot be retracted within a short window of time, what is going on with work in which potentially fraudulent data analyses are more cleverly presented? What else is out there that cannot be trusted? That is something that should cause us all to lose some sleep.

One final thought for anyone thinking of collaborating with Zhang: don't. If you absolutely cannot help yourself, insist on seeing the data before agreeing to be part of the project. That is a safe practice regardless of the situation. If I take on a statistician for a project, or someone who is at least better versed in a particular statistical method than I am, I insist on sending the data set or database, and I expect that statistician to double-check my work and ask difficult questions as needed. That can save a lot of grief, assuming the statistician is actually looking at what is being sent. One of the tragedies for some of Zhang's coauthors is that they never had access to the data sets to which they lent their names and reputations, nor, apparently, were they allowed access. That is not how we do science, folks.

In the meantime, more papers by this particular author are in the pipeline, and it will become more of a struggle to keep up with the dross likely to be found in any of them. Again, that is something that should cause us all to lose some sleep.
