I have been covering significant facets of this particular media violence article by Tian et al. (2016) elsewhere. You can read my previous post here. I had also noted earlier a weird apparent copy-and-paste situation regarding a reported non-significant 3-way interaction here. Finally, I spent a bit of time on the Stroop task the authors purported to use, which you can read about here.
I think there is a theme that emerges if you read enough of the articles published by authors in Qian Zhang's lab. That theme can be described as follows: it doesn't add up. There are just so many oddities in this particular article, or in any of a number of their articles, that cataloging them all can be a bit overwhelming. I should never, ever, have to say that about anyone's work. I certainly don't expect absolute perfection, and I have certainly made my share of blunders. Those get fixed. That's just the way it is.
So what am I on about this time? Let's get to the skinny. Let's note that there are definitely some problems with the way the authors assess trait aggressiveness. I won't go into the weeds on the Buss and Perry Aggression Questionnaire itself, although I will note that there have been questions about its psychometric properties. When I have some time, maybe that would make for an interesting blog post. We know the authors apparently just translated the items into Chinese and left it at that. We get no reliability estimates other than the ones Buss and Perry originally provided. It's weird, and something I would probably ask about in a peer review, but it is what it is now that it's published.

I am not exactly a big fan of artificially dividing scores on a questionnaire into "high" and "low" groups. At least the authors didn't use a median split. That said, we have no idea what happened to those in the middle. I want to assume that participants were pretested and then recruited to participate, but I am not exactly confident that occurred. The authors really don't provide adequate detail to know how these groups were formed, or whether those were the only participants analyzed. It would be something else if a whole subset of participants provided their data and were thrown out of these analyses without any explanation as to why. We're just left to guess.
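For what it's worth, here is a minimal sketch of the kind of extreme-groups selection I'm guessing at. Everything in it is hypothetical - the cutoffs, the sample size, the scores - since the authors report none of these details. The point is simply that however the high/low split was made, a chunk of screened participants goes unaccounted for unless the authors say otherwise.

```python
import numpy as np

# Hypothetical Buss-Perry (BPAQ) totals for a screening sample of 300.
rng = np.random.default_rng(0)
bpaq_scores = rng.normal(loc=75, scale=15, size=300)

# A common extreme-groups choice: keep the bottom and top 27%.
low_cut, high_cut = np.quantile(bpaq_scores, [0.27, 0.73])
low_group = bpaq_scores[bpaq_scores <= low_cut]
high_group = bpaq_scores[bpaq_scores >= high_cut]

retained = len(low_group) + len(high_group)
print(f"screened: {bpaq_scores.size}, retained: {retained}, "
      f"unaccounted for: {bpaq_scores.size - retained}")
```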
Means are just plain weird throughout this article. So, sometimes, are the standard deviations. Case in point:
Take a careful look. If you are anything like me, you might be looking at that and shaking your head. Perhaps you might even mutter, "Whiskey. Tango. Foxtrot." There is absolutely no way to convince me that a mean of 504.27 is going to have a standard deviation of 549.27. That is one hell of a typo, my friends. Now take a look at the means that go with Table 2. The means in Table 1 are presumably from the whole sample, yeah? That's what I am assuming. But. But. But. What is with those subsample means in Table 2? The condition means in the Aggressive Words column only barely bracket the whole-sample mean for aggressive words. How does that happen? And those standard deviations? Then look at the Nonaggressive Words column. The mean in Table 1 could not exist as any weighted average of the means for Nonaggressive Words in the violent and non-violent movie conditions.
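To be concrete about why that bothers me: the whole-sample mean has to be a weighted average of the two condition means, so it has to fall between them. Here is a trivial sketch of that check with placeholder values (I am deliberately not reproducing the article's numbers here):

```python
def pooled_mean(mean_a, n_a, mean_b, n_b):
    """Whole-sample mean implied by two subgroup means and their sample sizes."""
    return (mean_a * n_a + mean_b * n_b) / (n_a + n_b)

# Placeholder values: two movie conditions of 60 participants each.
m_violent, m_nonviolent = 620.0, 640.0
print(pooled_mean(m_violent, 60, m_nonviolent, 60))  # 630.0
# Any reported whole-sample mean outside [620, 640] is arithmetically impossible.
```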
Something does not add up. Think on this for a bit. Say you are planning a meta-analysis in which media violence (broadly defined) is the IV and you are examining any of a number of aggression-related outcomes as your DV. What are you going to use to estimate effect size, and, more importantly, are you going to trust the computations you obtain? If you are like me, you might pound the computer desk a couple of times and then try to nicely ask the corresponding author for data. The corresponding author uses SPSS just like I do (I know this because I tracked down his website), so the process of reproducing analyses and so on would be seamless for me. Don't count on it ever happening - at least not voluntarily. I've tried that before and was stonewalled. Not even a response. Not kidding. Of course, I am just some nobody from a small university in Arkansas. Who am I to question these obviously very important findings? You know how it goes. The rabble start asking for data, and the next thing you know, it's anarchy.
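For context, this is roughly what I would be stuck computing from the reported descriptives: a standard pooled-SD Cohen's d. The values below are hypothetical stand-ins, but they show why a standard deviation like the 549.27 in Table 1 matters - it swamps any mean difference and drags the effect size toward zero (and an SD that errs in the other direction would inflate it instead).

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical reaction-time descriptives, two conditions of 60 participants each:
print(round(cohens_d(540.0, 60.0, 60, 500.0, 60.0, 60), 2))      # 0.67 with a plausible SD
print(round(cohens_d(540.0, 549.27, 60, 500.0, 549.27, 60), 2))  # 0.07 with an SD like Table 1's
```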
What I am left with is a potential mess. I could do my best to employ whatever formulas I might use for effect size computations, but I would have to ask myself whether what I was computing was merely garbage - and worse, garbage artificially inflating mean effect size estimates. So what I would have is a study that should be included in a meta-analysis based on its content, but maybe should not be, given analyses that appear to be at best incompetently reported (that would be the charitable take). I don't like being in that position when I work on a meta-analysis. Full disclosure: I thought an update of an old Bettencourt & Kernahan (1997) analysis on male-female differences as a moderator of responses to aggression-inducing stimuli (such as violent media) would be a fun little project to pursue, and this article was one that would have otherwise been of genuine interest to me - especially since such differences are taken a bit more seriously than in the past, as are methods for assessing publication bias.
Look. If you read through the entire article, you will be stupefied by the sheer magnitude of the errors contained within. I have questions. How did this get past peer review? Did the editor actually read through the manuscript before sending it out to peer reviewers? Why was this manuscript not desk rejected? If these seem like leading questions, that is not my intention. This article should never have seen the light of day in its present form.
What to do? The editor has a clear ethical obligation to demand data from this lab. Those data should still be archived and analyzable. If there are privacy concerns, it is easy enough to remove identifying info from a data set prior to sharing it (a minimal sketch of that is below). No biggie. If the editor cannot get cooperation, the editor has an obligation to lean on the university that allowed this research to happen - in this case, Southwest University in China. The university does have an ethics council. I know. I looked that up as well. Questions about data accuracy should certainly be within the scope of that ethics council's responsibilities. At the end of the day, there is no question that some form of correction is required. I don't know if a corrigendum would suffice (with accurate numbers and a publicly accessible data set, just to set things straight) or if a straight-up retraction is in order. What I do know is that the status quo cannot stand, man.
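On the privacy point, here is that minimal sketch of what de-identification amounts to for a tabular data set. The variable names are ones I invented for illustration, not the lab's actual fields:

```python
import pandas as pd

# Hypothetical raw data with identifying columns (invented for illustration).
raw = pd.DataFrame({
    "student_id": ["2016-001", "2016-002"],
    "name": ["A", "B"],
    "bpaq_total": [88, 61],
    "rt_nonaggressive_words": [512.4, 498.1],
})

# Drop the identifying columns and share the rest.
shareable = raw.drop(columns=["student_id", "name"])
shareable.to_csv("deidentified_data.csv", index=False)
print(shareable)
```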
In the meantime, reader beware. This is not the only article from this particular lab with serious problems. It is arguably not even the most egregiously error-ridden article - and that is really saying something. Understanding how various forms of mass media influence how we think, feel, and behave strikes me as a worthwhile activity, regardless of what the data ultimately tell us. Cross-cultural research is absolutely crucial. But we have to be able to trust that what we are reading is work where the authors have done their due diligence prior to submission, and that everyone else in the chain, from ethics committees to editors, has done theirs as well. The legitimacy of the psychological sciences hangs in the balance.