Sunday, September 15, 2019

The more things change...

A few months ago, I wrote a very brief critique of the following paper:

Tian, J. & Zhang, Q. (2014). Are Boys More Aggressive than Girls after Playing Violent Computer Games Online? An Insight into an Emotional Stroop Task. Psychology, 5, 27-31. doi: 10.4236/psych.2014.51006.

At the time, I offered the following image of a table, as it told a very damning story:

I noted at the time that the table was odd for a number of reasons, not least the discrepancy in one of the independent variables. The paper manipulates the level of violent content in video games, and yet the interaction term in Table 3 is listed as Movie type. That struck me as odd. The best explanation I can offer for that strange typo is that the lab involved was studying both movie violence and video game violence, and there is a strong likelihood that the authors simply copied and pasted information from a table in another paper without sufficient editing. Of course there were other problems as well. The F value for the main effect of Game Type could not be statistically significant; you don't even need to rely on statcheck.io to sort that one out. The table does not report the finding for a main effect of gender (or, probably more appropriately, sex). The analysis is supposed to be a MANCOVA, which would imply a covariate (none is reported) as well as multiple dependent variables (none are reported beyond the difference score in RTs for aggressive and non-aggressive words).
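Checking whether a reported F value can possibly be significant is straightforward to do yourself, with no external tools. Below is a minimal stdlib-only sketch that computes the upper-tail p-value of the F distribution via the regularized incomplete beta function (the continued-fraction method found in standard numerical references). The degrees of freedom and F value in the usage line are hypothetical stand-ins for illustration, not the paper's actual numbers.

```python
import math

def _betacf(a, b, x, max_iter=200, eps=3e-12):
    # Continued-fraction evaluation for the incomplete beta function
    # (modified Lentz's algorithm).
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    if abs(d) < 1e-300:
        d = 1e-300
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        if abs(d) < 1e-300:
            d = 1e-300
        c = 1.0 + aa / c
        if abs(c) < 1e-300:
            c = 1e-300
        d = 1.0 / d
        h *= d * c
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        if abs(d) < 1e-300:
            d = 1e-300
        c = 1.0 + aa / c
        if abs(c) < 1e-300:
            c = 1e-300
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def _betai(a, b, x):
    # Regularized incomplete beta function I_x(a, b).
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    ln_front = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                + a * math.log(x) + b * math.log(1.0 - x))
    front = math.exp(ln_front)
    if x < (a + 1.0) / (a + b + 2.0):
        return front * _betacf(a, b, x) / a
    return 1.0 - front * _betacf(b, a, 1.0 - x) / b

def f_p_value(F, df1, df2):
    """Upper-tail p-value for an F statistic with df1, df2 degrees of freedom."""
    return _betai(df2 / 2.0, df1 / 2.0, df2 / (df2 + df1 * F))

# Hypothetical values for illustration: a small F with 1 numerator df
# and moderate error df lands far above the .05 threshold.
p = f_p_value(1.50, 1, 60)
print(p > 0.05)
```

For a one-numerator-df test like this, you can also eyeball it: F(1, df2) is the square of a t statistic, so the .05 critical value sits near 4.0 for moderate error df, and anything well below that cannot be significant.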

There were plenty of oddities I did not discuss at the time. There are the usual problems with how the authors report the Stroop task that they use. There is also a problem with the way the authors define their sample. Note that the title indicates that this research used a sample of children. However, the sample reads more like adolescents and young adults (ages ranged from 12 to 21, and the average age was 16).

So that was over five years ago. What changed? Turns out, not much. Here is the erratum that was published in July 2019.

The authors are still acting as though they are dealing with a youth sample when, as noted earlier, this is a sample of adolescents and young adults, at least according to the method section as reported, including any changes made. Somehow the standard deviation for participants' age changes, even though the mean does not. Odd. What they were calling Table 3 is now Table 1, and it is at least appropriately referred to as an ANOVA. The gender main effect is still missing. The F tests change a bit, although it is now made clearer that this is a paper whose conclusions rest on a sub-sample analysis. I am not sure there is enough information for me to determine whether the mean-square error term would yield a pooled standard deviation consistent with the means and standard deviations reported in what is now Table 2.

The conclusions the authors draw are a good deal different from what they would have drawn initially. From my standpoint, any erratum or corrigendum should correct whatever mistakes were discovered. This "erratum" (actually a corrigendum) does not: errors that were in place in the original paper persist in the alleged corrections. I have not yet tried a SPRITE test to determine whether the means and standard deviations that are now being reported are plausible. I am hoping that someone reading this will do that, as I don't exactly have tons of spare time.
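For anyone who wants to take up that invitation, here is a brute-force sketch of the idea behind SPRITE: given a sample size, a mean and SD reported to two decimals, and the integer range of the scale, search for any sample that could have produced those statistics. This is an assumption-laden toy version (exhaustive search only works for tiny n and small scales; real SPRITE uses a heuristic search), and the numbers in the usage lines are hypothetical, not the paper's.

```python
import itertools
import statistics

def find_plausible_sample(n, mean, sd, lo, hi, tol=0.005):
    """Return one integer sample on [lo, hi] whose mean and SD round to the
    reported values (to within tol), or None if no such sample exists."""
    for sample in itertools.combinations_with_replacement(range(lo, hi + 1), n):
        if (abs(statistics.mean(sample) - mean) <= tol
                and abs(statistics.stdev(sample) - sd) <= tol):
            return sample
    return None

# Hypothetical illustration on a 1-5 scale with n = 5:
print(find_plausible_sample(5, 3.00, 1.58, 1, 5))  # a match exists, e.g. (1, 2, 3, 4, 5)
print(find_plausible_sample(5, 3.00, 0.10, 1, 5))  # None: no integer sample fits
```

If the search comes back empty for the reported n, mean, and SD, the statistics as published cannot have come from any real sample on that scale, which is exactly the kind of red flag SPRITE is designed to surface.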

Here are some screen shots of the primary changes according to the erratum:



What is now called Table 2 is a bit off as well. I know what difference scores in reaction time tasks normally look like. Ironically, the original manuscript comes across as more believable, which is really saying something.

So did the correction really correct anything? In some senses, clearly not at all. In other senses, I honestly do not know, although I have already shared some doubts. I would not be surprised if this and other papers from this lab are eventually retracted. We would be better served if we could actually view the data and the research protocols that the authors should have on file. That would give us all more confidence than is currently warranted.

In the meantime, I could make some jokes about all of this, but really this is no laughing matter for anyone attempting to understand how violent media influence cognition in non-WEIRD samples, and for meta-analysts who want to extract accurate effect sizes.
