Previously, I noted that the latest Zhang et al. (2020) paper had at least one serious error: instead of computing the difference between reaction times for aggressive (weapon) images and neutral images, the authors simply used the reaction times to the weapon images as the DV. Hence, we as readers are left with a misleading set of analyses and a potentially misleading narrative. Fortunately, the authors had already shared their data, which made detecting the error fairly easy. Why the reaction times for the neutral images, and the difference scores (which would have been the real DV), didn't have their own columns is a question only the authors can answer.
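To be clear about how simple the correct DV would have been to compute, here is a minimal sketch. The values and variable names are invented for illustration; I am not reproducing anything from the shared dataset itself:

```python
# Hypothetical per-participant mean reaction times (ms); all numbers invented.
rt_weapon = [520.0, 480.0, 505.0]   # RTs to aggressive (weapon) images
rt_neutral = [500.0, 470.0, 510.0]  # RTs to neutral images

# The difference score is the DV the analysis should have used,
# rather than the raw weapon-image RTs alone.
rt_diff = [w - n for w, n in zip(rt_weapon, rt_neutral)]
print(rt_diff)  # [20.0, 10.0, -5.0]
```

One list comprehension, or one extra column in the spreadsheet. That the column never appeared is what makes the omission so puzzling.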
With this lab (as is probably the case with others), it is often difficult to glean from the information appearing in a published paper whether variables were entered and computed correctly. Whether those omissions reflect a bit of sleight of hand, simple human error, or misunderstanding is often difficult to deduce. However, sometimes authors make it easy for readers to see for themselves that they have goofed. I have found some rather odd analyses in which IVs, as well as DVs, were not quite analyzed correctly.
One of my favorite papers published by the Zhang lab, just for the sheer madness it contained, was the one published in Personality and Individual Differences nearly five years ago. That was the first, and I think only, effort these authors made to replicate and extend research on the weapons priming effect (itself a fairly controversial topic). The DV situation appears okay in the initial analysis under section 4.1. However, where things fall apart (aside from a grossly undersized df, given the sample size) is that the authors only examined the difference in reaction times between aggressive and neutral words under the weapon prime condition, while completely ignoring the neutral prime condition. The authors eventually did correct the df for that section in a pretty massive corrigendum. However, they never addressed that they had run the wrong analysis to establish a weapons priming effect. They really should have read Anderson et al. (1998) more carefully before doing so. The authors needed to establish that the difference between RTs to aggressive and neutral words was larger, and in the predicted direction, when participants saw weapon primes than when they were presented with neutral images. Also left unanswered was the nagging question of the three-way interaction effect that duplicated a three-way interaction effect in another paper authored by this same research team. I got the impression that the current editor-in-chief at Personality and Individual Differences was not much in the mood for dealing with this mess to begin with, and that extracting even superficial corrections from Zhang et al. (2016) was probably a minor miracle. In theory, since the authors changed a single digit in the F-test for the three-way interaction, perhaps the point is now moot. I am still concerned that a certain amount of self-plagiarism happened, but the editor-in-chief chose to let it go.
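To make the needed contrast concrete: the weapons priming effect is a difference of differences across prime conditions, not a single within-condition difference. A minimal sketch, with invented cell means (not values from the paper):

```python
# Hypothetical cell means (ms); all numbers invented for illustration.
rt = {
    ("weapon_prime", "aggressive_word"): 545.0,
    ("weapon_prime", "neutral_word"):    560.0,
    ("neutral_prime", "aggressive_word"): 558.0,
    ("neutral_prime", "neutral_word"):    556.0,
}

# Aggressive-minus-neutral word RT difference within each prime condition.
diff_weapon = rt[("weapon_prime", "aggressive_word")] - rt[("weapon_prime", "neutral_word")]    # -15.0
diff_neutral = rt[("neutral_prime", "aggressive_word")] - rt[("neutral_prime", "neutral_word")]  # 2.0

# The priming effect is the contrast BETWEEN conditions: did weapon primes
# speed up aggressive-word responses more than neutral primes did?
priming_effect = diff_weapon - diff_neutral
print(priming_effect)  # -17.0
```

Testing only `diff_weapon` against zero, as the authors effectively did, throws away the control condition entirely and cannot establish a priming effect.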
As was the case with the most recent article in question, the Zhang lab had enlisted an established American aggression researcher, Phillip Rodkin. Rodkin's wheelhouse was bullying rather than media violence, so he seemed an odd choice of collaborator for a media violence paper. I honestly don't know how much access Rodkin had to the original data, nor could I comment on whether he would have known what to look for when checking the analyses. He had already been deceased for a while when this paper was published. Hence, we will likely never know.
The Zhang et al. (2016) paper had something striking in common with a paper on which Zhang was second author and Rodkin was also a collaborator. Each paper reported a three-way interaction deemed nonsignificant, although according to a Statcheck analysis, the three-way interaction would have to have been statistically significant based on what was originally reported in each paper. Publication of duplicate analyses is presumably serious business, but apparently the powers that be can overlook such matters. Perhaps the corrigendum on the Zhang et al. (2016) paper makes the point moot, as I noted earlier. The erratum in the other paper entirely ignores the pesky issue of that three-way interaction effect.
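For readers unfamiliar with Statcheck: it essentially recomputes the p-value implied by a reported test statistic and its degrees of freedom, then flags mismatches with the reported p. A rough sketch of that kind of check, using invented numbers (not the statistics from either paper), via `scipy`:

```python
from scipy.stats import f

# Invented example values -- NOT the statistics reported in either paper.
F_reported, df1, df2 = 4.50, 1, 200
p_reported = 0.12  # an invented "nonsignificant" reported p

# Recompute the p-value implied by the reported F and dfs
# (survival function = upper-tail probability of the F distribution).
p_recomputed = f.sf(F_reported, df1, df2)

# A Statcheck-style flag: the recomputed p disagrees with the reported p,
# and crosses the .05 threshold the reported value did not.
inconsistent = round(p_recomputed, 2) != round(p_reported, 2)
print(inconsistent, p_recomputed < 0.05)
```

When a reported F would force p below .05 but the text calls the effect nonsignificant, that is exactly the kind of inconsistency Statcheck is built to surface.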
As I have probably said too many times, I find this state of affairs very disappointing. As someone who still finds media violence research interesting (although definitely from the standpoint of a skeptic), I treasure efforts by researchers who study non-WEIRD populations. As an educator and researcher who is eager to decolonize my particular areas of expertise, I would ordinarily welcome work coming out of China. Unfortunately, the work from this lab is so chock-full of errors that it is best left uncited. Hold out for the real thing. Hold out for competently and ethically conducted work.