I am hoping you all are generally familiar with the Stroop Interference Task. It has been around a long time. If you need a refresher, this will at least be a start. In my neck of the research woods, some form of an Emotional Stroop Task is occasionally used. It works just like the original task. Individuals are presented with multiple trials in which a word is paired with a color and the individual is supposed to name the color. Of interest is reaction time to naming the color. If there is some sort of interference, individuals should respond more slowly to the stimuli than they would otherwise. In aggression research, an example might help. Here is a version of a Stroop task in which individuals are primed with either weapons or neutral objects:
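To make the scoring concrete, here is a minimal sketch of how emotional Stroop interference is typically computed: mean reaction time to color-naming aggressive words minus mean reaction time to neutral words, per participant. The function name and all data values below are made up for illustration; they are not from any actual study.

```python
from statistics import mean

def interference_score(trials):
    """trials: list of (word_type, rt_ms) tuples for one participant.
    Returns mean RT on aggressive-word trials minus mean RT on
    neutral-word trials; positive values indicate interference."""
    aggressive = [rt for word_type, rt in trials if word_type == "aggressive"]
    neutral = [rt for word_type, rt in trials if word_type == "neutral"]
    return mean(aggressive) - mean(neutral)

# Hypothetical weapon-primed participant: slower on aggressive words.
trials = [("aggressive", 720), ("aggressive", 700),
          ("neutral", 650), ("neutral", 630)]
print(interference_score(trials))  # 70 ms of interference
```

A positive score is read as slowed color-naming for aggressive words, i.e., greater accessibility of aggressive thoughts.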
And here are the results:
Notice that in the weapon priming condition, participants took significantly longer to respond to colors paired with aggressive words than neutral words. Contrast that with those participants in the neutral objects condition.
So far so good.
Now, here is an example of a Stroop task in which the prime stimulus is movie violence. Check out the results:
Of course, there is some other weirdness. We get overall mean reaction times for aggressive and non-aggressive words in Table 2. Those could not possibly be right if the means and standard deviations in Table 3 are correct. Again, something is just not adding up. I could nitpick and ask why there are no decimal points for the means in each cell but there are decimal points for the standard deviations; that is a very, very minor complaint. The larger complaint is twofold: the authors are not going to be able to say that violent movies increase the accessibility of aggressive thoughts based on what they presented from the Stroop task, and there is a huge (I mean HUGE) disconnect between the overall mean reaction times in Table 2 and the cell means provided in Table 3.

After that, all we get are mean difference scores. I don't necessarily have a problem with that, although in Table 5 there is no way the cell means provided are going to square with the marginal means for the violent and non-violent movie conditions. That is also problematic. And if those difference scores are to be believed, it would appear that short-term exposure to a violent movie might actually suppress the accessibility of aggressive thoughts among the subsample measuring high in trait aggressiveness, which is not what the authors wish to argue. Again, that is highly problematic. I suppose the authors made some typos when writing up the paper? There is no way of knowing without access to the data set.
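The arithmetic behind the Table 2 vs. Table 3 complaint is simple to check: an overall (marginal) mean must equal the weighted average of its cell means, weighted by cell sample sizes. Here is a sketch of that sanity check; the numbers are hypothetical, not the values reported by Zhang et al. (2013).

```python
def marginal_mean(cell_means, cell_ns):
    """Weighted average of cell means, weighted by cell sample sizes.
    This is what a reported overall mean must equal if the cell
    means are correct."""
    total_n = sum(cell_ns)
    return sum(m * n for m, n in zip(cell_means, cell_ns)) / total_n

# Hypothetical cell mean RTs (ms) for aggressive words in two movie
# conditions, each with n = 30 participants:
implied_overall = marginal_mean([650.0, 700.0], [30, 30])
print(implied_overall)  # 675.0

# If the overall mean printed in the paper's summary table differs
# substantially from this implied value, the two tables cannot both
# be correct.
```

With equal cell sizes this reduces to a simple average, so any reader can run the check from the reported tables alone.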
Also, Table 4 is just odd. It is unclear how a MANCOVA would be appropriate, as the only DV the authors consider for the remaining analyses is a difference score. MANOVA and MANCOVA are appropriate analytic techniques for situations in which multiple DVs are analyzed simultaneously. The authors also fail to list a covariate. Maybe it is gender? Hard to say. Without an adequate explanation, we as readers are left to guess. Even if a MANCOVA were appropriate, Table 4 is a case study in how not to set up a MANCOVA table. Authors should be as explicit as possible about what they are doing. I can read Method and Results sections just fine, thank you. I cannot, however, read minds. Zhang et al. (2013) have placed us all in the position of being mind-readers.
Again, this is a strange article in which the analyses don't make much sense. Something is off in the way the descriptive statistics are reported. Something is off in the way the inferential statistics are reported. The findings perhaps "looked" superficially good enough to get through peer review. It is possible that, depending on the set of reviewers, the findings were ones that fit their own pre-existing beliefs; in other words, some form of confirmation bias. I can only speculate on that, and will simply leave it at that. Speculation is not ideal, of course.
This particular journal is not exactly a premier journal, which is hardly a knock on it. A number of low-impact journals do no worse, as near as I can tell, than the more premier journals when it comes to the quality of peer review. What matters is that this is an example of an article that slipped through peer review that arguably should not have. Given that this same lab has produced a number of subsequent articles with many of the same problems in more premier journals, I am justifiably worried.
Consider this a cautionary tale.
Zhang, Q., Zhang, D., & Wang, L. (2013). Is aggressive trait responsible for violence? Priming effects of aggressive words and violent movies. Psychology, 4, 96-100. doi:10.4236/psych.2013.42013