Tuesday, November 22, 2022

One thing mass shooters have in common

If you are someone who likes to place bets, one safe bet is that the next mass shooting perpetrator will have a history of domestic violence. The majority of mass shootings are domestic violence-related: the perpetrator is attacking a spouse, an ex, and/or other family members. In other incidents, the perpetrator may not be targeting a partner or family, but has a history of domestic violence, one that often shows up in police records and similar sources. So keep that in mind when looking for risk factors.

The other factor that appears to interact with a history of domestic violence is consumption of violent rhetoric targeting specific social groups based on ethnicity, religion, gender or gender identity, or sexual orientation. The recent mass shooting at an LGBTQ club in Colorado Springs appears to be tied to hateful rhetoric targeting the LGBTQ community, as well as legislation discriminating against our LGBTQ peers. Words on social media and elsewhere have consequences, and risk triggering those who already have histories of violence and, I would suspect, histories of holding authoritarian attitudes. In the latter case, authoritarian aggression is the real concern. Authoritarians who believe that the authority figures they view as legitimate consider violence against a minority group acceptable are at risk of following through, and at a bare minimum are at risk of approving of such behavior.

Saturday, August 6, 2022

The struggle continues: Zheng and Zhang (2016) Pt. 4

Let's focus on Study 1 of Zheng and Zhang (2016). It should have been fairly simple, at least in terms of data reporting. However it happened, the authors homed in on two video games that they believed were equivalent on every dimension except violent content, and merely needed to run a pilot study to back that up with solid evidence. It should have been a slam dunk.

Not so fast.

The good news is that, unlike Study 2, the analyses of the age data are actually mathematically plausible. That's swell. The authors had participants rate the games on a variety of dimensions using Likert-style questions, which makes sense. The scaling was reported to be 1 to 5, in which 1 meant very low and 5 meant very high for each dimension. My intention was to focus on Tables 1 and 2. If a 1 to 5 Likert scale was used for each of the rating items, there were some problems. One glaring problem: there is no way the means could exceed 5. And yet, for the Violent Content and Violent Images dimensions, the mean was definitely above 5 in each case. That does not compute. I have no idea what scaling was actually used on the questionnaires. If I assume a 1 to 7 Likert scale, some means and standard deviations that seemed mathematically impossible become at least plausible. But there is no way to know. We do not have the data. We do not have any of the materials or protocols. We have to take everything on faith. I had intended to include a set of images of SPRITE analyses on Tables 1 and 2, but didn't see the point.
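For anyone who wants to try this kind of sanity check themselves, here is a minimal sketch of the two simplest tests in Python: a reported mean must fall inside the scale's endpoints, and, per the GRIM test, a mean computed from n integer responses can only take certain values. The numbers in the example are hypothetical stand-ins, not values from the paper.

```python
def mean_in_range(mean, scale_min, scale_max):
    """A reported mean cannot fall outside the endpoints of the scale."""
    return scale_min <= mean <= scale_max

def grim_consistent(mean, n, decimals=2):
    """GRIM test: n integer responses sum to a whole number, so the
    rounded mean must be reachable as s / n for some integer s."""
    s = round(mean * n)
    return any(round(k / n, decimals) == round(mean, decimals)
               for k in (s - 1, s, s + 1))

# Hypothetical illustration only -- not the paper's actual values:
print(mean_in_range(5.40, 1, 5))   # False: impossible on a 1-5 scale
print(grim_consistent(3.47, 220))  # True: 763/220 rounds to 3.47
```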

Then we have the usual problem with degrees of freedom. With a 2x2 mixed ANOVA, with game type as a repeated measure and "gender" as a between-subjects factor, the degrees of freedom would not have deviated much from the sample size of 220: the error terms should have 218 degrees of freedom. I think we can all agree on that. Degrees of freedom below 100 would be impossible, and yet the reported analyses show exactly that. It does not help that Table 1 is mislabeled as t-test results. If we assumed paired-samples t-tests, the degrees of freedom for each item would have been 219. Again, the reported degrees of freedom do not compute.
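To make the expected numbers explicit, here is a short sketch deriving the degrees of freedom from the sample size alone, assuming two levels on each factor as described above:

```python
def expected_dfs(n_total, n_groups=2, n_levels=2):
    """Degrees of freedom for a two-way mixed ANOVA (one between-subjects
    factor, one repeated measure) and for a paired-samples t-test."""
    between_error = n_total - n_groups                    # 220 - 2 = 218
    within_error = (n_total - n_groups) * (n_levels - 1)  # 218
    return {
        "between effect": (n_groups - 1, between_error),
        "within effect": (n_levels - 1, within_error),
        "interaction": ((n_groups - 1) * (n_levels - 1), within_error),
        "paired t-test df": n_total - 1,                  # 219
    }

print(expected_dfs(220))
```

However participants might have been arranged across the two factors, nothing in this arithmetic gets anywhere near error degrees of freedom below 100 for a sample of 220.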

What I can say with some certainty is that Zheng and Zhang (2016) should not be included in any meta-analysis addressing violent video games and aggression or media violence and aggression. My efforts to address some of these issues with the editorial staff never went very far. It's so funny how problems with a published paper lead editorial staff to go on vacation. I get it. I'd rather be out of town and away from email contact when someone emails (with evidence) concerns about a published paper. Unfortunately, if the data and analyses cannot be trusted, we have a problem. This is precisely the sort of paper that, once published, ends up included in meta-analyses. Meta-analysts who would rather exclude findings that are, at best, questionable will be pressured to include such papers anyway. How much that biases the overall findings is clearly a concern. And yet the attitude seems to be to let it go, as if the status quo were sufficient. Surely one flawed study could not hurt that much? We simply don't know. The same lab went on to publish research relevant to media violence researchers, with samples of over 3,000. Several of those papers ended up retracted. Others probably should have been, but probably won't be, for whatever political reasons one might imagine.

All I can say is the truth is there. I've tried to lay it out. If someone wants to run with it and help make our science a bit better, I welcome you and your efforts.

Sunday, July 31, 2022

The NeverEnding Story: Zheng and Zhang (2016) Pt. 3

Whenever I have a few seconds of spare time and feel like torturing myself, I go back to reading a paper I have blogged about previously (see here and here). Each reading reveals more errors, and my remarks over the previous blog posts reflect that. Initially I thought Study 1 was probably okay, or at least less problematic than Study 2. However, Study 1 is every bit as problematic as Study 2. I think I was so overwhelmed by the sheer number of errors in Study 2 that I had no energy left to devote to Study 1. I do want to circle back to Study 1. But first, I want to add one more remark about Study 2.

With regard to Study 2, I focused on the very odd reporting of degrees of freedom (df) for each statistical analysis, given that the experiment had 240 participants. I showed that if we were to believe those df to be correct (hint: we shouldn't), there were several decision errors. And to top it off, the authors try to write off what appears to be a statistically significant three-way interaction as non-significant. That would still be the case even if the appropriate df were reported. The analysis of the so-called main effect of violent video games on reaction time to aggressive versus non-aggressive goal words was also inadequate. As noted before, not only were the df undoubtedly wrong, but the analysis does not compare the difference in reaction times between the treatment and control conditions. I would have expected either a 2x2 ANOVA demonstrating the interaction, or for the authors to compute the differences (in milliseconds) between aggressive and non-aggressive goal words for both the treatment and control groups and then run the appropriate one-way ANOVA or t-test; Anderson et al. (1998) took this latter approach quite successfully (a sketch of that approach appears below). At least the authors offered means for that main analysis. In subsequent analyses, the authors quickly dispense with reporting means at all. In no case do the authors report standard deviations. That's the capsule summary of my critique up to this point. Now for the proverbial cherry on top: the one time the authors do report a mean and standard deviation together is when reporting the age of the participants, and even then they manage to make a mess of things.
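For concreteness, here is a sketch of that difference-score approach. The reaction times below are simulated placeholders (the actual data were never made available), so the condition names and numbers are mine, not the paper's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 120  # participants per condition (placeholder)

# Simulated reaction times (ms) to aggressive and non-aggressive goal
# words in each condition -- illustrative values only.
violent_aggr    = rng.normal(560, 40, n)
violent_nonaggr = rng.normal(580, 40, n)
control_aggr    = rng.normal(575, 40, n)
control_nonaggr = rng.normal(580, 40, n)

# Per-participant facilitation score: a more negative difference means
# faster responding to aggressive words.
diff_violent = violent_aggr - violent_nonaggr
diff_control = control_aggr - control_nonaggr

# Independent-samples t-test on the difference scores: did the violent
# game condition show greater facilitation than the control condition?
t, p = stats.ttest_ind(diff_violent, diff_control)
print(f"t({2 * n - 2}) = {t:.2f}, p = {p:.4f}")
```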

Recall that the authors had a sample of 240 children ranging in age from 9 to 12 years for Study 2. The mean age of the participants was 11.66, with a standard deviation of 1.23. Since age can be treated as integer data, I used a post-peer-review tool called SPRITE to check whether that mean and standard deviation are mathematically possible. To do so, I entered the range of possible ages (as provided by the authors), the target mean and standard deviation, and the maximum number of distributions to generate. To my chagrin, I got an error message. Specifically, SPRITE informed me that the target standard deviation I had provided, based on what the authors reported, was too large: the largest mathematically plausible standard deviation was 1.17. Even something as elementary as the mean and standard deviation of participants' ages gets messed up. You can try SPRITE for yourself and determine whether what I am finding is correct. My guess is you will. Below is the result I obtained. I prefer to show my work.
[Screenshot of the SPRITE output: the target standard deviation of 1.23 is rejected as too large for ages 9 to 12 with a mean of 11.66.]
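If you would rather script the check than use the web tool, the same idea can be brute-forced in a few lines: with only four possible ages and a fixed sample size, every possible distribution can be enumerated. Here is a minimal sketch (SPRITE itself uses a cleverer search, but the logic is the same):

```python
def feasible(target_mean, target_sd, n, values):
    """Enumerate every way to distribute n integer ages across `values`
    and test whether any distribution reproduces the reported mean and
    SD, both rounded to two decimal places."""
    for a in range(n + 1):
        for b in range(n + 1 - a):
            for c in range(n + 1 - a - b):
                counts = (a, b, c, n - a - b - c)
                mean = sum(v * k for v, k in zip(values, counts)) / n
                if round(mean, 2) != target_mean:
                    continue
                ss = sum(v * v * k for v, k in zip(values, counts))
                var = max(ss - n * mean * mean, 0.0) / (n - 1)  # sample variance
                if round(var ** 0.5, 2) == target_sd:
                    return True
    return False

# Reported values from Study 2: 240 children aged 9 to 12, M = 11.66, SD = 1.23
print(feasible(11.66, 1.23, 240, (9, 10, 11, 12)))  # prints False
```

No combination of ages 9 through 12 reproduces both the reported mean and the reported standard deviation, which is consistent with SPRITE's error message.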
So Study 2 is not to be trusted at all. What about Study 1? It's a mess for its own reasons. I'll circle back to that in a future post.

Friday, July 29, 2022

A Blast From the Past: Retractions and Meta-Analysis Edition

I stumbled across this article, Media and aggression research retracted under scrutiny, and found it to be an interesting short read. The article's author chronicles some recent retractions, as well as what had been an ongoing investigation of several papers coauthored by Qian Zhang of Southwest University. I've written enough about his work over the last few years. I think referring to many of Zhang's papers as having "been called into question" is a fair assessment.

Part of the story chronicles Samuel West, who included one of Zhang's papers in his meta-analysis at the request of a reviewer. His meta-analysis would undergo another round of peer review around the time he learned that that particular Zhang paper was under investigation at the same journal. Ouch. West certainly has legitimate concerns about including a potentially dodgy finding in his meta-analysis. In this case, the paper by Zhang and colleagues was not retracted, but I am sure West has misgivings about having included the paper in his database in the first place. I can certainly empathize. My most recent published meta-analysis included one of Zhang's papers that would eventually be retracted early this year. That said, there are plenty of papers generated by Zhang's lab with obvious problems, or, in the case of his more recent work, with problems that are more cleverly hidden. I agree with Amy Orben that the continued presence of problematic studies in journals and meta-analyses is "a major problem," especially when we consider how politicized media violence research is. Requiring archiving of data, data analyses, and research protocols probably helps to the extent that it is enforced - at least anything incorrect or fraudulent can more easily be sniffed out. Otherwise, one can only hope for sleuths with enough time on their hands and no concern for the career repercussions of blowing the whistle on published papers that should never have seen the light of day. Good luck with that.

I do take issue with Zhang's characterization of Hilgard as someone who is "just trying to make his name based just on claiming that everyone else does bad research." I get that Zhang is a bit sore about the retractions, and Hilgard was the person who contacted Zhang and a plethora of journal editors regarding the papers in question. That said, there was plenty of chatter about Zhang's work from 2018 onward, and several of us just wanted to know that we hadn't gone insane - that the obvious data errors, including inaccurate degrees of freedom, mathematically impossible means and standard deviations, and tables that made no sense, really were what we thought they were. Hilgard was far and away better connected to the sphere of media violence research as an active researcher himself, and had the data-analytic know-how, along with the connections that come with being at an R1 university, to do what needed to be done. Aside from that, Hilgard made plenty of positive contributions to the methodology side of psychological science, and from interacting with him online and in person over the years, I'll simply say he's a good person to know.

I think this article is somewhat helpful in pointing out that even those who believe there is a link between violent content in media (such as video games) and aggression can view Zhang's work and see it for what it is, and express an appropriate level of skepticism. One can take the philosophical perspective that there is "no one right way to look at the data," and that's all well and good. But at the end of the day, if the analyses show decision errors, and the means and standard deviations forming the basis for those analyses are simply mathematically impossible, the only reasonable conclusion is that the data and analyses in their present form cannot be accepted as valid.

The only bone I really have to pick is that the author characterizes the body of media violence research as asking whether "violent entertainment causes violence." Although I am aware that there are researchers in this area of inquiry who would draw that conclusion, there are plenty of other investigators who are much more cautious about what we can learn from our available methods (a lot of aggression is mild, after all). There are also plenty of skeptics who doubt that there is any link between media violence and even the mild forms of aggression that we can measure. As far as I am aware, there is no link between exposure to violent content in mass media and violent behavior in everyday life. All that said, this is a useful article that captures a series of events that I know quite intimately.

Suddenly, I am in the mood for some cartoon violence. I think I'll watch some early episodes of Rick and Morty. Goodnight.

Monday, July 25, 2022

The 50th anniversary of the article that brought an end to the Tuskegee Syphilis Study

Here's an article I strongly recommend reading. The study itself is something my departmental colleagues and I discuss in our methods courses as an example of flagrantly unethical research. Although not a psychology study by any stretch, it is a cautionary tale of the abuses that have occurred (and can still occur) when research exploits marginalized people.

Friday, July 22, 2022

Food for thought

Read Academe is suffering from foreign occupiers: Lessons from Vaclav Havel for a profession in decline. In this case, the problem is how the academy is run - very much top-down, with an emphasis on branding that trumps pretty much everything else. In some senses, as the author sees it, it is reminiscent of life in Warsaw Pact-era Eastern Europe. And we are suffering a brain drain of faculty and students as a consequence.

Friday, February 4, 2022

A long-overdue retraction

Several years after I first sounded the alarm, a paper that I had failed to get retracted (the editor of PAID at the time offered a superficially "better" Corrigendum in 2019 instead) has now officially been retracted. Dr. Joe Hilgard really put in the work to make it happen. Here is his story:

The saga of this weapons priming article is over. There are plenty of articles remaining that have yet to be adequately scrutinized.