Tuesday, September 28, 2021

One of my pet peeves when it comes to cognitive priming tasks in aggression research

I'm probably going to come across as the late Andy Rooney for a moment. You know what I hate? Some of the apparent flexibility in how cognitive priming tasks get measured in my specialty area: aggression. I spent some time with these sorts of tasks during my days as a doctoral student in Mizzou's aggression lab in the late 1990s. The idea is fairly simple. We prime participants (typically traditional first-year college students) with stimuli thought to be aggression-inducing (e.g., violent content in video games, images of weapons, violent lyrical content) or neutral stimuli (e.g., non-violent content in video games, images of non-weapons such as flowers, nonviolent lyrical content), and then record reaction times to aggressive and non-aggressive words. I'm probably most familiar with the pronunciation task, in which participants see, for example, an image (a weapon or a neutral object) for a few seconds, followed by a target word that participants read aloud into a microphone, with the latency recorded in milliseconds. The lexical decision task is similar, except that in addition to reacting to aggressive or neutral words, participants must also decide whether what they are seeing is a word or a non-word. At the end of the day, we get reaction time data, and we look for differences in latency, measured in milliseconds.

For a prime to work, we expect the latency for aggressive words, relative to neutral words, to be lower in the treatment condition than in the control condition. That's the pattern we found in both experiments in Anderson et al. (1998) and in Lindsay and Anderson (2000), for example. The difference in latency between aggressive and non-aggressive words was significantly larger, and in the predicted direction, in the weapon-prime condition than in the neutral-prime condition. We could conclude that weapons appeared to prime the relative accessibility of aggressive cognition (or aggression-related schemata, or whatever nomenclature you might prefer).

The way I was trained, and the literature I tended to read, worked largely the way I just described, regardless of the stimuli used as primes and regardless of the target words or concepts the experimenters were attempting to prime. In our case, comparing the relative difference in reaction time latencies between responses to aggressive and non-aggressive words gave us a basis for comparison across treatment conditions, and it took into account some of the noise we would likely get in the data, such as individual differences in overall reaction speed.
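To make that logic concrete, here is a minimal sketch in R of the difference-score approach described above. The column names (prime, rt_aggressive, rt_nonaggressive) and the numbers are made up for illustration; this is the analytic logic, not the actual analysis from any of the papers cited here.

```r
# Toy illustration of the difference-score logic (hypothetical data, not real results)
set.seed(1)
n <- 40  # hypothetical participants per prime condition

dat <- data.frame(
  prime            = rep(c("weapon", "neutral"), each = n),
  rt_aggressive    = c(rnorm(n, mean = 540, sd = 40),   # ms, weapon primes
                       rnorm(n, mean = 560, sd = 40)),  # ms, neutral primes
  rt_nonaggressive = rnorm(2 * n, mean = 565, sd = 40)  # ms, both conditions
)

# Per-participant relative accessibility: aggressive minus non-aggressive latency.
# More negative values = relatively faster responses to aggressive words.
dat$rt_diff <- dat$rt_aggressive - dat$rt_nonaggressive

# The priming question: is that difference score reliably more negative
# after weapon primes than after neutral primes?
t.test(rt_diff ~ prime, data = dat)
```

In this framing, the raw speed of either group never enters the test directly; only the within-participant difference does, which is what lets the comparison absorb individual differences in overall reaction speed.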

Lately I have seen, in my corner of the research universe, papers published in which the authors report reaction time latencies only for aggressive words, even though they admit in their published reports that they also collected reaction time data for non-aggressive words. They appear to be getting statistically significant findings, but I find myself asking a question: so you find that participants respond faster to aggressive words in the treatment condition than in the control condition. That's nice, but what do those reaction time findings for aggressive words alone really tell us about the priming of the relative accessibility of aggressive cognition? In more lay terms, you say you found participants respond faster to aggressive words, but compared to what? If participants in the treatment condition simply respond faster to every word, aggressive words included, that speed advantage by itself tells us nothing about relative accessibility. I have also seen the occasional paper slip through in which the authors attempt to have it both ways. They'll use raw aggressive word reaction times as their basis for establishing that there is a priming effect, but their other hypothesis tests actually do use what I see as a proper difference score between aggressive and non-aggressive words. Oddly enough, in one presumably soon-to-be-retracted number, when the authors use the approach I was taught, the effect size for the treatment condition becomes negligible, and the authors have to rely on subsample analyses to make any statement about the treatment condition actually priming the relative accessibility of aggressive cognition. Now, when I see subsequent research where only the reaction times for aggressive words are reported, I wonder whether what I am reading can be trusted, or whether something is being hidden from those of us relying on the accuracy of those reports.

That is the sort of thing that can keep me awake at night.

Tuesday, September 21, 2021

The jamovi MAJOR module

I've been switching most of my data analyses over to jamovi (as a very tentative step toward learning R), as well as switching my instruction to jamovi. Overall, I love the interface, and will probably say more about it later. For now I just want to say a few brief words about the MAJOR module, which serves as an interface to metafor, a meta-analytic package for R. MAJOR is very intuitive, and so far I've found it relatively easy to reproduce basic analyses from prior meta-analyses I've worked on. It produces helpful forest plots and funnel plots. It meets my basic needs. When it comes to publication bias, though, I wish there were more options available. As of now, MAJOR offers Fail-safe N (and my advice has been that friends do not let friends use Fail-safe N) and Egger's test to detect potential publication bias. I hope the developers of MAJOR plan on adding more publication-bias options, even if just trim-and-fill analyses (fixed and random). That said, meta-analysis has moved well beyond any of the above techniques, and I'd love to see other publication-bias methods included (PET-PEESE comes to mind). That would be helpful. Otherwise, I am quite happy with what I've been able to do so far. Kudos to the developers of MAJOR for what they have done.
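In the meantime, since MAJOR sits on top of metafor, those extra checks can be run in R directly. Here is a rough sketch, using metafor's bundled BCG example data as a stand-in for whatever effect-size data set you actually have (yi = effect sizes, vi = sampling variances):

```r
library(metafor)

# metafor's bundled example data, standing in for your own effect sizes
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg, data = dat.bcg)

res <- rma(yi, vi, data = dat)   # random-effects model

fsn(yi, vi, data = dat)   # Fail-safe N (with all the caveats noted above)
regtest(res)              # Egger-type regression test for funnel plot asymmetry
trimfill(res)             # trim-and-fill estimate of potentially "missing" studies

# PET-PEESE as meta-regressions on the standard error and the sampling variance
pet   <- rma(yi, vi, mods = ~ sqrt(vi), data = dat)
peese <- rma(yi, vi, mods = ~ vi,       data = dat)
```

Running trimfill() on a fixed-effect fit as well (method = "FE" in rma) would cover the fixed-and-random pairing mentioned above.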