Another article from the Zhang lab was published very recently. I do have to say that the format of the manuscript is well-done compared to many of the earlier papers. There are some initial concerns I will voice now. I may come back to this one later when and if the moment arises.
The literature review notes a number of meta-analyses that purport to provide support for media violence causing aggressive outcomes. The authors do offer a quick summary of several other meta-analyses showing that the average effect sizes from media violence research are negligible. Then the authors quickly state that the evidence is "overwhelming" that violent media leads to aggression. Um....not quite. There is actually a serious debate about the extent to which exposure to violent content in films, video games, etc., is linked to actual aggressive behavior, and the impression I get is that at best the matter is far from settled. The evidence for such a link is arguably underwhelming rather than overwhelming. But hey, let's blow through all that research and at least check off the little box saying it was cited before discarding its message.
I am not too keen on the idea of throwing participants out of a study unless there is a darned good reason. Equipment malfunctions, failures to follow instructions, and suspicion (i.e., guessing the hypothesis) would strike me as good reasons. Merely trying to get an even number of participants in the treatment and control conditions is not, in and of itself, a good reason. If one is inclined to do so anyway and state that the participants whose data were examined were randomly chosen, then at least go into some detail as to what that procedure entailed.
I will admit that I do not know China particularly well, but I am a bit taken aback that 15 primary schools could yield over 3,000 kids who are exactly 10 years of age. That is...a lot. Those schools must be huge. Then again, these kids go to school in a mega-city, so perhaps this is within the realm of possibility. This is one of those situations where I am a bit on the skeptical side, but I won't rule it out. Research protocols would certainly clarify matters on that point.
I am not sure why the authors use Cohen's d for effect size estimates for main effect analyses and then use eta squared for the remaining ANOVA analyses. Personally I would prefer consistency. It's those inconsistencies that make me want to ask questions. At some point I will dive deeper into the mediation analyses. Demonstrating that accessibility of aggressive thoughts mediates the link between a particular exemplar of violent media and aggression is the great white whale that aggression researchers have been chasing for a good while now. If true and replicable, this would be some potentially rare good news for models of aggression derived from a social cognition perspective.
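For what it's worth, the two effect size metrics are directly comparable for a two-group contrast, so mixing them is an avoidable inconsistency. A minimal sketch of the standard conversion (assuming equal group sizes, and noting the specific d values below are illustrative benchmarks, not figures from the paper):

```python
def eta_sq_from_d(d: float) -> float:
    """Approximate eta-squared for a two-group comparison with equal
    group sizes, via the standard conversion eta^2 = d^2 / (d^2 + 4)."""
    return d ** 2 / (d ** 2 + 4)

# Cohen's conventional benchmarks: a "small" effect (d = 0.2) works out
# to roughly 1% of variance explained; a "medium" effect (d = 0.5) to
# roughly 6%.
print(round(eta_sq_from_d(0.2), 3))  # 0.01
print(round(eta_sq_from_d(0.5), 3))  # 0.059
```

A conversion like this also makes it easier for readers to judge whether a reported "significant" effect amounts to anything practically meaningful.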
It is not clear if there were any manipulation checks included in the experimental protocols, nor if there was any extensive debriefing for suspicion - i.e., hypothesis guessing. In reaction time experiments that I ran, as well as in any experiments using the competitive reaction time task as a measure of aggression, it was standard operating procedure to include both manipulation checks and an extensive debriefing with each participant, as problems like suspicion could contaminate the findings. Maybe those are procedural practices that have been abandoned altogether? I would hope not.
One of the most difficult tasks in conducting any media violence experiment is ascertaining that the violent and nonviolent media samples in question are as equivalent as possible except, of course, for their level of violent content. It is possible that the cartoon clips the authors use are perfectly satisfactory. Unfortunately we have to take that on faith for the time being.
At the end of the day, I am left with a gut feeling that I shouldn't quite believe what I am reading, even if it appears relatively okay on the surface. There are enough holes in the report itself that I suspect a well-versed skeptic can have a field day. Heck, as someone who is primarily an educator, I am already finding inconvenient questions and all I have done is give the paper an initial reading. This is my hot take on this particular paper.
Assuming the data check out, what I do appear to be reading thus far suggests that these effects are small enough that I would not want to write home about them. In other words, in a best case scenario, I doubt this paper is going to be the one to change any minds. It will appeal to those needing to believe that violent content in various forms of mass media is harmful, and it will be either shrugged off or challenged by skeptics. I guess this was good enough for peer review. It is what it is. As I have stated elsewhere, peer review is a filter, and not a perfect filter. The rest is left to us who want to tackle research post-peer review.