Sunday, July 31, 2022

The NeverEnding Story: Zheng and Zhang (2016) Pt. 3

Whenever I have a few seconds of spare time and feel like torturing myself, I go back to reading a paper I have blogged about previously (see here and here). Each reading reveals more errors, and my remarks across those previous posts reflect that. Initially I thought Study 1 was probably okay, or at least less problematic than Study 2. However, Study 1 is every bit as problematic as Study 2. I think I was so overwhelmed by the sheer number of errors in Study 2 that I had no energy left to devote to Study 1. I do want to circle around to Study 1, but first I want to add one more remark about Study 2.

With regard to Study 2, I focused on the very odd reporting of degrees of freedom (df) for each statistical analysis, given that the experiment had 240 participants. I showed that if we were to believe those df to be correct (hint: we shouldn't), there were several decision errors. To top it off, the authors try to write off what appears to be a statistically significant 3-way interaction as non-significant, and that would still be the case even if the appropriate df were reported. The so-called main effect of violent video games on reaction time to aggressive versus non-aggressive goal words was inadequately analyzed. As noted before, not only were the df undoubtedly wrong, but the analysis does not compare the difference in reaction times between the treatment and control conditions. I would have expected either a 2x2 ANOVA demonstrating the interaction, or for the authors to compute the differences (in milliseconds) between aggressive and non-aggressive goal words for both the treatment and control groups, and then to run the appropriate one-way ANOVA or t-test. Anderson et al. (1998) took this latter approach and were quite successful. At least the authors offered means for that main analysis. In subsequent analyses, the authors dispense with reporting means altogether, and in no case do they report standard deviations. That's the capsule summary of my critique up to this point. Now for the proverbial cherry on top: the one time the authors do report a mean and standard deviation together is when reporting the age of the participants, and even there they manage to make a mess of things.
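To make the difference-score approach concrete, here is a minimal sketch using simulated, hypothetical reaction times (none of these numbers come from the paper; the 600 ms baseline, 50 ms spread, 15 ms bias, and group sizes are all made up for illustration). Each participant contributes one difference score (aggressive minus non-aggressive RT), and the two conditions are then compared with a Welch t-test:

```python
import math
import random
import statistics

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    se = math.sqrt(vx / len(x) + vy / len(y))
    return (statistics.mean(x) - statistics.mean(y)) / se

def diff_scores(n, bias_ms):
    # Simulated per-participant RT difference (aggressive minus
    # non-aggressive goal words), in milliseconds.
    return [random.gauss(bias_ms, 50) for _ in range(n)]

random.seed(0)
treatment = diff_scores(120, -15)  # hypothetical: faster on aggressive words
control = diff_scores(120, 0)      # hypothetical: no bias either way

t = welch_t(treatment, control)
```

With 240 participants split into two groups, the df for this test would sit near 238, which is the kind of sanity check that makes the reported df in the paper so hard to accept.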

Recall that the authors had a sample of 240 children ranging in age from 9 to 12 years for Study 2. The mean age for the participants was 11.66 with a standard deviation of 1.23. Since age can be treated as integer data, I used a post-peer-review tool called SPRITE to make sure that the mean age and standard deviation were mathematically possible. To do so, I entered the range of possible ages (as provided by the authors), the target mean and standard deviation, and the maximum number of distributions to generate. To my chagrin, I got an error message. Specifically, SPRITE informed me that the target standard deviation I had provided, based on what the authors reported, was too large. The largest mathematically plausible standard deviation was 1.17. Even something as elementary as the mean and standard deviation of participants' age gets messed up. You can try SPRITE for yourself and determine whether what I am finding is correct. My guess is that you will. Below is the result I obtained. I prefer to show my work.
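If you'd rather not take SPRITE's word for it either, the underlying idea is easy to check by hand. For a fixed sum, variance is maximized by pushing as many scores as possible to the endpoints of the allowed range, so you can compute the largest sample SD that 240 integers between 9 and 12 could possibly produce while still rounding to the reported mean. This is a back-of-the-envelope bound of my own, not SPRITE's search procedure, so the exact ceiling it reports differs from SPRITE's 1.17, but it agrees on the key point: an SD of 1.23 is out of reach.

```python
import math

def max_sd(n, mean, lo, hi):
    """Largest sample SD achievable by n integers in [lo, hi]
    whose mean rounds (to 2 decimals) to the reported mean."""
    best = 0.0
    # every integer total whose mean rounds to the reported mean
    lo_sum = math.ceil((mean - 0.005) * n)
    hi_sum = math.floor((mean + 0.005) * n)
    for total in range(lo_sum, hi_sum + 1):
        # variance is maximized at the endpoints: k copies of hi,
        # at most one leftover in-between value, the rest at lo
        k, r = divmod(total - n * lo, hi - lo)
        vals = [hi] * k + ([lo + r] if r else []) + [lo] * (n - k - (1 if r else 0))
        m = total / n
        var = sum((v - m) ** 2 for v in vals) / (n - 1)  # sample variance
        best = max(best, math.sqrt(var))
    return best

print(round(max_sd(240, 11.66, 9, 12), 2))  # prints 0.95 — well under the reported 1.23
```

Any distribution of 240 ages in that range that averages 11.66 has to pile most children at 12, which leaves very little room for spread; no amount of rearranging gets the SD anywhere near 1.23.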

So Study 2 is not to be trusted at all. What about Study 1? It's a mess for its own reasons. I'll circle back to that in a future post.

Friday, July 29, 2022

A Blast From the Past: Retractions and Meta-Analysis Edition

I stumbled across this article, Media and aggression research retracted under scrutiny, and found it to be an interesting short read. The article's author chronicles some recent retractions, as well as what had been an ongoing investigation of several papers coauthored by Qian Zhang of Southwest University. I've written plenty about his work over the last few years. I think saying that many of Zhang's papers have "been called into question" is a fair assessment.

Part of the story chronicles Samuel West, who included one of Zhang's papers in his meta-analysis at the request of a reviewer. His meta-analysis would undergo another round of peer review around the time he learned that particular Zhang paper was under investigation at the same journal. Ouch. West certainly has legitimate concerns about including a potentially dodgy finding in his meta-analysis. In this case, the paper by Zhang and colleagues was not retracted, but I am sure West has his misgivings about including the paper in his database in the first place. I can certainly empathize. My most recent published meta-analysis included one of Zhang's papers that would eventually get retracted early this year. That said, there are plenty of papers from Zhang's lab with obvious problems or, in the case of his more recent work, problems that are more cleverly hidden. I agree with Amy Orben that the fact that problematic studies continue to remain in journals and meta-analyses is "a major problem" when we think about how politicized media violence research is. Requiring archiving of data, data analyses, and research protocols probably helps to the extent that it is enforced: at least anything that might be incorrect or fraudulent can more easily be sniffed out. Otherwise, one can only hope for sleuths with enough time on their hands and no concerns about career repercussions for blowing the whistle on published papers that should never have seen the light of day. Good luck with that.

I do take issue with Zhang's characterization of Hilgard as someone who is "just trying to make his name based just on claiming that everyone else does bad research." I get that Zhang is a bit sore about the retractions, and Hilgard was the person who contacted Zhang and a plethora of journal editors regarding the papers in question. That said, there was plenty of chatter about Zhang's work from 2018 onward, and there were probably several of us who just wanted to know that we hadn't gone insane, and that the obvious data errors, including inaccurate degrees of freedom, mathematically impossible means and standard deviations, and tables that made no sense, really were what we thought they were. Hilgard was far and away better connected to the sphere of media violence research as an active researcher himself, and had the data-analytic know-how and the connections that come with being at an R1 university to do what needed to be done. Aside from that, Hilgard made plenty of positive contributions to the methodology side of psychological science, and from interacting with him online and in person over the years, I'll simply say he's a good person to know.

I think this article is somewhat helpful in pointing out that even those who believe there is a link between violent content in media (such as video games) and aggression can view Zhang's work, see it for what it is, and express an appropriate level of skepticism. One can take the philosophical perspective that there is "no one right way to look at the data," and that's all well and good. But at the end of the day, if the analyses show decision errors, and the means and standard deviations forming the basis for those analyses are simply mathematically impossible, the only reasonable conclusion is that the data and analyses in their present form cannot be accepted as valid.

The only bone I really have to pick is that the author characterizes the body of media violence research as asking whether "violent entertainment causes violence". Although I am aware that there are researchers in this area of inquiry who would draw that conclusion, there are plenty of other investigators who interpret what we can learn from our available methods much more cautiously (a lot of aggression is mild, after all). There are also plenty of skeptics who doubt that there is any link between media violence and even the mild forms of aggression that we can measure. As far as I am aware, there is no demonstrated link between exposure to violent content in mass media and violent behavior in everyday life. All that said, this is a useful article that captures a series of events I know quite intimately.

Suddenly, I am in the mood for some cartoon violence. I think I'll watch some early episodes of Rick and Morty. Goodnight.

Monday, July 25, 2022

The 50th anniversary of the article that brought an end to the Tuskegee Syphilis Study

Here's an article I strongly recommend reading. The study itself is something my colleagues and I discuss in our department's methods courses as an example of flagrantly unethical research. Although not a psychology study by any stretch, it is a cautionary tale of the abuses that have occurred (and can still occur) when research exploits marginalized people.

Friday, July 22, 2022

Food for thought

Read Academe is suffering from foreign occupiers: Lessons from Vaclav Havel for a profession in decline. In this case the problem is how the academy is run, which is very much top-down, with an emphasis on branding trumping pretty much everything else. In some senses, it is reminiscent of existence in the Warsaw Pact version of Eastern Europe, as the author sees it. And we are suffering a brain drain of faculty and students as a consequence.