It's been a while since I last posted a Research Confidential. For those who wonder about the inspiration for that occasional post heading, credit the late Anthony Bourdain's book Kitchen Confidential. Much like Bourdain did in that book, I try to expose some of the dark alleys that one might encounter in social psychological research. My prior post reminded me of a situation where some of my own data ended up buried by my then-advisor while I was working on my PhD.
Let's set the scene a bit. The work I was heavily involved in as a PhD student dealt with media violence (broadly defined) and some of the routes to aggressive behavior as described in the General Aggression Model (or as it was then known, the General Affective Aggression Model). I was at University of Missouri's flagship campus in Columbia, MO (go Tigers!) at the time, and Craig Anderson was my advisor before he headed off to greener pastures at Iowa State University. None of that is a secret, of course. Evidence of my time at Mizzou can still be found on my CV, my Google Scholar profile, etc. Much of the work I was involved in at the time would not get published until the mid-aughts, which wasn't exactly great for my career prospects at the time, but that's another matter for another time.
Not every experiment I conducted worked out. That goes with the territory, but I had a pretty decent track record during my time in Anderson's lab, and later on under Ann Bettencourt's guidance. For now let's stay focused on my time with Anderson, who was my advisor until he moved to Iowa State. He and I discussed an experiment in which we would examine whether violent content in rock and rap lyrics would prime aggressive cognition (i.e., make aggressive thoughts more accessible) and anger. In other words, we were going to test the cognitive and affective routes to aggression when individuals were exposed to violent lyrical content. We also included a humor manipulation, although that had no bearing on the findings. Our dependent variables were a reaction-time measure called the pronunciation task, in which participants would see an aggressive or non-aggressive word and say it aloud into a microphone while the computer recorded reaction time in milliseconds (faster reaction times for aggressive words relative to neutral words would indicate greater accessibility of aggressive cognition), and a state hostility scale that was essentially a measure of anger. A fellow research assistant, Janie Eubanks, ran a similar experiment using a different measure of aggressive cognition.
The experiment was a success. The main effect of aggressive lyrical content was statistically significant for both dependent variables. So far, so good. However, there was a significant interaction with participants' self-reported sex. In the aggressive lyric condition, women showed quicker reaction times to aggressive words relative to neutral words than did men. A similar pattern was found for anger. Now, extant aggression theories postulated that men were more prone to have higher accessibility of aggressive cognition and more prone to respond angrily than women. I was finding the opposite pattern in the data. So when Anderson and I looked at the findings, he was not especially thrilled. I had taken the liberty, as instructed, to compose a write-up of the findings, expecting that this would be a paper to be submitted for publication. It was organized in such a way that we divided the results for the cognitive route and affective route into separate "experiments" (a form of salami slicing, which is problematic in its own right, and which I have refrained from since I got out of grad school). The real problem for Anderson was that the pattern of findings did not conveniently fit the theoretical model, and so he didn't want us to go further with reporting. Since I am a bit of a pest, he eventually threw me a bone and allowed me to present the findings at a conference (probably Midwestern Psychological Association, or something along those lines). And that was it. Janie Eubanks' parallel experiment was more in line with theory, and it got published a few years later (for the record, I am proud of Janie's accomplishment).
I had a fairly plausible explanation for the findings: The songs in both the aggressive and non-aggressive conditions were all performed by men, and in the case of the aggressive songs, let's just say the clips we played were probably a bit edgy. Perhaps female participants simply found the aggressive songs offensive, which could have led them to show more anger and greater accessibility of aggressive thoughts than the male participants in the same treatment condition. It's hard to know for sure, of course, as our debriefing protocols did not probe for the extent to which individuals might have been offended by the lyrical content, and that was something that in its own right would have been worth following up with another experiment. But that was not to be. The conclusion I was drawing was hardly profound, but it did point to a difficulty in conducting media violence research that deserved more attention - if nothing else, to encourage colleagues to do more pilot testing prior to running experiments comparable to the ill-fated lyric experiment I conducted. That could have led to a productive conversation, if my advisor had been interested, and who knows - perhaps the quality of media violence research would have been better as we approached the dawn of the 21st century. We'll never know. Not every experiment needs to be a theoretical breakthrough, and there's no law saying that all data analyses we report have to conveniently align with a theory. This is hardly a profound insight, but human behavior is messy, and the endeavor to study human behavior objectively, as humans, is inevitably messy. Why not embrace that, I wonder now, as I did over a quarter century ago.
*Note: salami slicing is the practice of chopping up a single study's worth of data and analyses into multiple studies, either to give the illusion of a multi-study paper (editors at the time loved those) or to produce multiple individual papers to inflate publication counts. As a practice, salami slicing is not even an ethical gray area. It is unethical, and I advise early career researchers to refrain from it.