Sunday, September 15, 2019

The more things change...

A few months ago, I wrote a very brief critique of the following paper:

Tian, J. & Zhang, Q. (2014). Are Boys More Aggressive than Girls after Playing Violent Computer Games Online? An Insight into an Emotional Stroop Task. Psychology, 5, 27-31. doi: 10.4236/psych.2014.51006.

At the time, I offered an image of the paper's Table 3, as it told a very damning story.

I noted at the time that the table was odd for a number of reasons, not least the discrepancy in one of the independent variables. The paper manipulates the level of violent content in video games, and yet the interaction term in Table 3 is listed as Movie type. That struck me as odd. The best explanation I have for that strange typo is that the lab was studying both movie violence and video game violence, and the authors most likely copied and pasted information from a table in another paper without sufficient editing. Of course, there were other problems as well. The F value for the main variable of Game Type could not have been statistically significant given its degrees of freedom; you don't even need to rely on statcheck.io to sort that one out. The table does not report a main effect of gender (or, probably more appropriately, sex). The analysis is supposed to be a MANCOVA, which would imply a covariate (none is reported) as well as multiple dependent variables (none are reported beyond the difference score in reaction times for aggressive versus non-aggressive words).
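To make the statcheck point concrete, here is a minimal sketch anyone can run: a reported F statistic and its degrees of freedom fully determine the p-value, so impossible combinations are easy to catch. The numbers below are hypothetical placeholders, not the figures from Tian and Zhang's table.

```python
# Recompute the p-value implied by a reported F statistic, the same check
# that statcheck.io automates. Placeholder values only, not the paper's.
from scipy import stats

def f_test_pvalue(f_value: float, df_between: int, df_within: int) -> float:
    """Upper-tail p-value for an F statistic with the given df."""
    return stats.f.sf(f_value, df_between, df_within)

# Hypothetical example: suppose a table claims F(1, 56) = 1.93, p < .05.
p = f_test_pvalue(1.93, 1, 56)
print(f"F(1, 56) = 1.93 implies p = {p:.3f}")  # ~0.170: not significant
```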

There were plenty of oddities I did not discuss at the time. There are the usual problems with how the authors report the Stroop task they use. There is also a problem with the way the authors define their sample: the title indicates that this research used a sample of children, yet the sample looks more like adolescents and young adults (ages ranged from 12 to 21, with an average age of 16).

So that was over five years ago. What has changed? Turns out, not much. Here is the erratum that was published in July 2019. The authors still act as though they are dealing with a youth sample when, as noted earlier, this is a sample of adolescents and young adults, at least according to the method section as reported, including any changes made. Somehow the standard deviation for participants' age changes, if not the mean. Odd.

What they were calling Table 3 is now Table 1, and it is at least appropriately referred to as an ANOVA. The gender main effect is still missing. The F tests change a bit, although it is now clearer that this is a paper whose conclusions rest on a sub-sample analysis. I am not sure there is enough information for me to determine whether the mean-square error term would yield a pooled standard deviation consistent with the means and standard deviations reported in what is now Table 2.

The conclusions the authors draw are a good deal different from those they drew initially. From my standpoint, any erratum or corrigendum should correct whatever mistakes were discovered. This "erratum" (actually a corrigendum) does not: errors present in the original paper persist in the alleged corrections. I have not yet tried a SPRITE test to determine whether the means and standard deviations now being reported are plausible. I am hoping that someone reading this will do that (a rough sketch of the idea appears below), as I don't exactly have tons of spare time.
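For anyone inclined to take up that request, here is a minimal homemade sketch of the SPRITE idea, in the spirit of Heathers and colleagues' technique (their rSPRITE tool is more sophisticated; this is just the core logic). Given a reported mean, standard deviation, sample size, and the bounds of an integer scale, it searches for any sample that could have produced those statistics. Finding one does not validate the data, but failing to find one after many restarts is a red flag. The values in the demonstration call are hypothetical placeholders, not numbers from the erratum.

```python
# Rough SPRITE-style plausibility check for reported (mean, SD, n) on an
# integer scale [lo, hi]. Hill-climbs toward the reported SD while holding
# the sample mean fixed.
import random
import statistics

def sprite_search(mean, sd, n, lo, hi, tol=0.005, restarts=200, steps=5000):
    """Search for an integer sample on [lo, hi] whose mean and SD match the
    reported values within tolerance. Returns one such sample, or None."""
    target_sum = round(mean * n)
    # GRIM-style sanity check first: is the reported mean even reachable?
    if abs(target_sum / n - mean) > 0.005 or not lo * n <= target_sum <= hi * n:
        return None
    for _ in range(restarts):
        # Random starting sample, then nudge values until the sum is exact.
        xs = [random.randint(lo, hi) for _ in range(n)]
        while sum(xs) != target_sum:
            i = random.randrange(n)
            step = 1 if sum(xs) < target_sum else -1
            if lo <= xs[i] + step <= hi:
                xs[i] += step
        # Hill-climb on the SD, moving pairs of values in opposite
        # directions so the mean never changes.
        for _ in range(steps):
            if abs(statistics.stdev(xs) - sd) <= tol:
                return xs  # a plausible sample exists
            i, j = random.randrange(n), random.randrange(n)
            need_spread = statistics.stdev(xs) < sd
            a, b = (1, -1) if need_spread == (xs[i] >= xs[j]) else (-1, 1)
            if lo <= xs[i] + a <= hi and lo <= xs[j] + b <= hi:
                trial = xs.copy()
                trial[i] += a
                trial[j] += b
                if abs(statistics.stdev(trial) - sd) < abs(statistics.stdev(xs) - sd):
                    xs = trial
    return None  # nothing found: the reported statistics look implausible

# Hypothetical example (NOT values from the erratum): could a 7-point item
# with n = 30 plausibly yield M = 4.20, SD = 0.92?
print(sprite_search(4.20, 0.92, 30, 1, 7))
```

The mean-square error question is simpler in principle: in a between-subjects ANOVA, the square root of the MS error term should approximate the pooled within-cell standard deviation, so if the corrected tables supplied both, one could check them against each other directly.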

Here are some screen shots of the primary changes according to the erratum:



What is now called Table 2 is a bit off as well: I know what difference scores in reaction time tasks normally look like (see the sketch below), and these are not it. Ironically, the original manuscript comes across as more believable, which is really saying something.
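For anyone unfamiliar with the dependent variable at issue, a Stroop difference score here is simply each participant's mean reaction time to aggressive words minus their mean reaction time to neutral words; such interference effects typically land on the order of tens of milliseconds. A toy illustration with simulated numbers, not data from the paper:

```python
# Computing one participant's emotional Stroop difference score from
# simulated reaction times (in milliseconds).
import statistics

aggressive_rts = [612, 655, 640, 598, 677]  # RTs to aggressive words, ms
neutral_rts = [590, 631, 622, 585, 660]     # RTs to neutral words, ms

diff = statistics.mean(aggressive_rts) - statistics.mean(neutral_rts)
print(f"Difference score: {diff:.1f} ms")  # ~18.8 ms
```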

So did the correction really correct anything? In some senses, clearly not at all. In other senses, I honestly do not know, although I have already shared some doubts. I would not be surprised if this and other papers from this lab are eventually retracted. We would be better served if we could actually view the data and the research protocols that the authors should have on file. That would give us all more confidence than is currently warranted.

In the meantime, I could make some jokes about all of this, but really this is no laughing matter for anyone attempting to understand how violent media influence cognition in non-WEIRD samples, or for meta-analysts who want to extract accurate effect sizes.
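That last concern is worth making concrete: meta-analysts convert reported test statistics directly into effect sizes, so an impossible F value propagates into every synthesis that includes it. Here is a sketch of two standard conversions, again with illustrative rather than actual inputs:

```python
# Two standard ways meta-analysts derive effect sizes from a reported F.
# If the F is wrong, both estimates are wrong. Illustrative inputs only.
import math

def partial_eta_squared(f_value: float, df1: int, df2: int) -> float:
    """Partial eta squared from an F test."""
    return (f_value * df1) / (f_value * df1 + df2)

def cohens_d_from_f(f_value: float, df2: int) -> float:
    """Cohen's d for a two-group comparison, F(1, df2), assuming equal n."""
    return 2 * math.sqrt(f_value / df2)

print(partial_eta_squared(1.93, 1, 56))  # ~0.033
print(cohens_d_from_f(1.93, 56))         # ~0.37
```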

Saturday, September 14, 2019

Prelude to the latest errata

Now that there have been some relatively new developments regarding research from Qian Zhang's lab, I think the best thing to do is to give you all some context before I proceed. So let's look at some of the blog posts I have composed about the articles that are presumably now being corrected:

A. Let's start with the most recent and work our way backwards. First, let's travel back to the year 2016. You can easily find this paper, which is noteworthy for being submitted roughly a year after its fourth author had passed away.

Tian, J., Zhang, Q., Cao, J., & Rodkin, P. (2016). The Short-Term Effect of Online Violent Stimuli on Aggression. Open Journal of Medical Psychology, 5, 35-42. doi: 10.4236/ojmp.2016.52005

See these blog posts:

"And bad mistakes/I've made a few"*: another media violence experiment gone wrong

Maybe replication is not always a good thing

It Doesn't Add Up: Postscript on Tian, Zhang, Cao, & Rodkin (2016)

A tale of two Stroop tasks

B. Now let's revisit the year 2014. There is one article of note here. I had one post on this article at the time, and wish I had devoted a bit more time to it. Note that in many of these earlier articles, Zhang goes by Zhang Qian, and for whatever reason, the journal of record recommends citing Qian as the family name. Make of that what you will. The reference follows.

Tian, J. & Zhang, Q. (2014). Are Boys More Aggressive than Girls after Playing Violent Computer Games Online? An Insight into an Emotional Stroop Task. Psychology, 5, 27-31. doi: 10.4236/psych.2014.51006.

See this blog post:

Funny, but sad

The year 2013 brings us two papers to consider. I devoted only a single blog post to the first article referenced. The second article got referenced twice, as I noticed the same oddity in how the authors described the Stroop task and analyzed the data based on it.

C. First we will start here with a basic film violence study.

Zhang, Q., Zhang, D., & Wang, L. (2013). Is Aggressive Trait Responsible for Violence? Priming Effects of Aggressive Words and Violent Movies. Psychology, 4, 96-100. doi: 10.4236/psych.2013.42013

See this blog post:

About those Stroop task findings (and other assorted oddities)

D. And here is the article in which the authors use the Stroop task in a most remarkably odd way.

Zhang, Q., Xiong, D., & Tian, J. (2013). Impact of media violence on aggressive attitude for adolescents. Health, 5, 2156-2161. doi: 10.4236/health.2013.512294

See this blog post:

Some more oddness (Zhang, Xiong, & Tian, 2013)

I could probably add some other work for context, as there are pervasive patterns that show up across studies over the course of this decade. As the authors have begun to rely on larger data sets, other troubling practices have appeared, such as analyzing only a small fraction of a sample (something I vehemently oppose as a practice). Whether the articles are published in low-impact or high-impact journals is of no importance in one sense: poorly conducted research is poorly conducted research, and if it needs to be corrected, it is up to the authors to do so in as transparent and forthright a manner as possible. That said, as this lab gets work published in higher-impact journals, the potential for incorrectly analyzed data, and hence misleading findings, to poison the proverbial well increases. That should trouble us all.

I want to end with something I said a few months ago, as it is important to understand where I am coming from as I once more proceed:
Although I don't have evidence that the Zhang lab was involved in any academic misconduct, and I have no intention of making accusations to that effect, I do think that some of the data reporting itself is at best indicative of incompetent reporting. All I can do is speculate, as I am unaware of anyone who has managed to actually look at this lab's data. What I can note is that there is a published record, and that there are a number of errors that appear across those published articles. Given the number of questions I think any reasonable reader of media violence research might have, Zhang and various members of his lab owe it to us to answer those questions and to provide us with the necessary data and protocols to accurately judge what went sideways.
I emphasize this point because this is really not personal. It is a matter of making sure that, at minimum, those of us who study media violence have accurate evidence at our disposal.

Wednesday, September 11, 2019

Coming soon

I am starting to get my head around some relatively new Zhang lab errata. I have questions. Stay tuned.