Sunday, October 27, 2019

Erratum to Zhang, Zhang, & Wang (2013) has errors

This is a follow-up to my commentary on the following paper:

Zhang, Q., Zhang, D., & Wang, L. (2013). Is Aggressive Trait Responsible for Violence? Priming Effects of Aggressive Words and Violent Movies. Psychology, 4, 96-100. doi: 10.4236/psych.2013.42013

The erratum can be found here.

It is disheartening when an erratum ends up being more problematic than the original published article. One thing that struck me immediately is that the authors continue to insist that they ran a MANCOVA. As I stated previously:
It is unclear just how a MANCOVA would be appropriate, as the only DV that the authors consider for the remaining analyses is a difference score. MANOVA and MANCOVA are appropriate analytic techniques for situations in which multiple DVs are analyzed simultaneously. The authors fail to list a covariate. Maybe it is gender? Hard to say. Without an adequate explanation, we as readers are left to guess. Even if a MANCOVA were appropriate, Table 4 is a case study in how not to set up a MANCOVA table. Authors should be as explicit as possible about what they are doing. I can read Method and Results sections just fine, thank you. I cannot, however, read minds.

In essence, my initial complaint remains unaddressed. One change: Table 4 is now Table 1, and it contains different numbers. Great. Based on the description given, I still have no idea (nor would any reasonably-minded reader) what the authors used as a covariate, nor what the purported multiple DVs analyzed simultaneously were supposed to be. This is not an analysis I use very often in my own work, although I have certainly done so in the past, and I do know how MANOVA and MANCOVA tables should be set up and how those analyses should be described; I did a fair amount of that for my first-year project at Mizzou a long time ago. The authors used a single difference score as their DV (the difference between reaction times to aggressive words and reaction times to nonaggressive words), which rules out any need for a MANOVA. And since no covariate is specified, a MANCOVA is ruled out as well. I am going to make a wild guess that the partial summary table that comprises Table 1 will end up being as nonsensical as similar tables generated in papers by this lab, including their errata and corrigenda. I do not expect to be able to recover the error MS from it, which I would need in order to estimate the pooled SD.
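For readers wondering why the error MS matters: given a complete ANOVA summary table, the square root of the error (within-groups) mean square is an estimate of the pooled SD, which is in turn what you need to compute a standardized effect size such as Cohen's d. A minimal sketch of that arithmetic follows; the numbers are invented for illustration and are not taken from the paper or the erratum.

```python
import math

# Hypothetical values for illustration only -- NOT from Zhang et al. (2013).
ms_error = 1600.0   # error (within-groups) mean square from a full ANOVA table
mean_high = 540.0   # mean RT difference score, high trait aggression group (ms)
mean_low = 480.0    # mean RT difference score, low trait aggression group (ms)

# With a complete summary table, the pooled SD is estimated as sqrt(MS_error).
pooled_sd = math.sqrt(ms_error)                # 40.0

# Cohen's d: the standardized mean difference, scaled by the pooled SD.
cohens_d = (mean_high - mean_low) / pooled_sd  # 1.50

print(f"pooled SD = {pooled_sd:.1f}, Cohen's d = {cohens_d:.2f}")
```

Without the error MS, none of this can be done, which is exactly why a partial summary table is useless to meta-analysts.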

I also want to note that the authors' description of Table 2 does not match the numbers in the table itself. I find that troubling. I am assuming that the authors mislabeled the columns and intended the low-trait and high-trait columns to be reversed. Even so, it is sloppy.
At least when I ran this document through Statcheck, the findings as reported appeared clean: no inconsistencies and no decision inconsistencies. I wish that provided even cold comfort. Since I do not know whether I can trust anything I have read in either the original article or the current erratum, I am not sure there is any comfort to be had.
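For the curious: Statcheck works by extracting APA-formatted test statistics and recomputing the p-value from the reported test statistic and degrees of freedom, flagging any mismatch with the reported p. A rough sketch of that core check (the reported values below are invented for illustration, not taken from the erratum; Statcheck's actual rounding rules are more nuanced than this):

```python
from scipy import stats

# A hypothetical reported result, e.g. "F(1, 56) = 4.20, p = .045".
# These numbers are invented for illustration, not taken from the erratum.
f_value, df1, df2, reported_p = 4.20, 1, 56, 0.045

# Recompute the p-value implied by the test statistic and its df.
recomputed_p = stats.f.sf(f_value, df1, df2)

# An "inconsistency" is a mismatch between reported and recomputed p
# (beyond rounding); a "decision inconsistency" is a mismatch that
# crosses the alpha = .05 threshold.
inconsistent = round(recomputed_p, 3) != reported_p
decision_inconsistent = (recomputed_p < 0.05) != (reported_p < 0.05)

print(f"recomputed p = {recomputed_p:.4f}, "
      f"inconsistent: {inconsistent}, "
      f"decision inconsistent: {decision_inconsistent}")
```

Of course, a clean Statcheck run only tells you that the reported statistics are internally consistent; it says nothing about whether the underlying data or analyses are trustworthy.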

What saddens me is that so much media violence research is based on WEIRD (Western, Educated, Industrialized, Rich, and Democratic) samples. That limits the generalizability of the findings, and it also limits how far any skepticism I and my peers might voice about media violence effects can extend. We need good non-WEIRD research. So a lab that is generating a great deal of non-WEIRD research, but whose output is riddled with errors, is a major disappointment.

At this juncture, the only cold comfort I would find is if the problematic studies from this lab were retracted wholesale. I do not say that lightly. I view retraction as a last resort, for when there is no reasonable way to correct the record without removing the paper itself. Here, retraction appears necessary for at least three reasons.

One, meta-analysts might use this research, either the original article or the erratum (or both, if they are not paying attention), to generate effect size estimates. If we cannot trust the effect size estimates we generate, it is pretty much game over.

Two, given that in a globalized market we all consume much of the same media (or at least the same genres), it makes sense to have evidence from non-WEIRD samples as well as WEIRD ones. Some of us want to know how violent media affect non-WEIRD populations precisely in order to determine whether our understanding of these phenomena is universal. The findings from this paper, and from this lab more broadly, do not contribute to that understanding. If anything, they detract from our ability to get any closer to the truth.

Three, the general public latches on to whatever seems real. If the findings are bogus, whether through gross incompetence or fraud, then the public is essentially being fleeced, which to me is simply unacceptable. Chinese taxpayers deserved better. So do all of us who are global citizens.

