Tuesday, April 16, 2024

Postscript to the Preceding: Weapons Priming Effect and Potential Allegiance Effect

As I noted in my prior post, almost all of the experiments explicitly testing for a weapons priming effect (short-term exposure to weapons increasing the relative accessibility of aggressive cognition) involve the primary General Aggression Model theorists (Anderson and, more recently, Bushman) and/or their graduate students or associates. We are a very small group of individuals. The methods we use are strikingly similar, both in terms of independent variables and dependent variables. Over the years, we used very similar protocols when running our experiments, and we used the same theoretical basis for our work. I can find only one researcher independent of our clique who successfully replicated our findings (Korb, 2016), and she never published her Master's thesis, as far as I am aware. With very few exceptions (e.g., Deuser, 1994), our experiments consistently found statistical significance. I often wonder whether there were more independent efforts to replicate our basic findings; if any were unsuccessful, I would be curious to know what those researchers thought happened, especially if they used protocols similar to ours. Otherwise, what we have is something of a niche area of inquiry that likely started and ended with just our cohort. I don't find that especially comforting.

Footnote: I am quite aware that Qian Zhang, who had a weapons priming effect paper retracted (Zhang et al, 2016), does still look at the weapons priming effect, and although his lab's findings on the surface are consistent with our own, I simply discard that work as I do not trust his reported descriptive and inferential statistics. Let's just say that GRIM and SPRITE tests tend to uncover mathematically impossible descriptive statistics in too many of his lab's findings. I shall leave it at that.
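For readers unfamiliar with these forensic tools, the core logic of the GRIM test is simple enough to sketch in a few lines of Python. This is my own illustration, not the published procedure, and it assumes a single integer-scale item per respondent: any true mean must be an integer sum divided by n, so many reported means are arithmetically impossible.

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM check: can `reported_mean`, rounded to `decimals` places, arise
    from n integer-valued responses? Any true mean must equal an integer
    sum divided by n."""
    target = round(reported_mean, decimals)
    # The only plausible integer sums are those nearest to mean * n.
    nearest = round(reported_mean * n)
    return any(
        round(s / n, decimals) == target
        for s in (nearest - 1, nearest, nearest + 1)
    )
```

For example, with n = 28 participants answering an integer-scale item, a reported mean of 5.19 fails the check: the nearest attainable means are 5.14, 5.18, and 5.21. (Note the test only has teeth when n is small relative to the reported precision, e.g., n < 100 for two-decimal means.)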

Sunday, March 31, 2024

The Weapons Priming Effect: A Brief History and a Word of Caution

After Carlson et al. (1990) published their meta-analysis, the weapons effect was considered an established phenomenon. Short-term exposure to weapons appeared to increase aggressive behavioral responses in lab and field settings compared with short-term exposure to neutral stimuli. After 1990, however, there was a dearth of research examining the effect of weapons on aggressive behavioral outcomes. Instead, there appeared to be a shift toward examining the cognitive underpinnings of the weapons effect. Although there were a couple of experiments in which participants were administered a Thematic Apperception Test (TAT), arguably as an attempt to assess whether participants thought more aggressively when exposed to weapons (e.g., Frodi, 1975), it would not be until the 1990s that a small group of social psychologists would more explicitly assess whether mere exposure to weapons or weapon images primed aggressive cognition, utilizing techniques pioneered by the Cognitive Revolution in psychology.

By the mid-1990s, some aggression researchers were using schema or associative network theories as a means to understand the impact of various aggression-inducing stimuli on aggressive cognition, affect, and behavior. Craig Anderson, for example, was already developing a theory known at the time as the General Affective Aggression Model (GAAM), shortened to General Aggression Model (GAM; Anderson & Bushman, 2002) by the start of this century. In that model, individuals store aggression-related information in the form of cognitive schemas and behavioral scripts. Exposure to stimuli theoretically believed to be associated with aggression or violence would prime these schemas or scripts, leading to an increased accessibility of aggressive thoughts, and potentially leading to an increase in aggressive behavioral responses. One interpretation of the classic Berkowitz and LePage (1967) weapons effect experiment was that those participants in the control room containing rifles were primed to think more aggressively and hence, under high levels of provocation, respond more aggressively.

The first explicit effort to test for a weapons priming effect appeared in the dissertation of Deuser (1994), a student of Craig Anderson. The experiments in that dissertation did not demonstrate a weapons priming effect at all: it did not matter whether participants were exposed to weapons or neutral stimuli, and there was no evidence to support the theory that weapons would prime aggressive thoughts. The first published evidence of a weapons priming effect was Anderson et al (1996), although the weapons priming effect was secondary to the main purpose of the article, which was to establish the General Aggression Model as a theory and to test the effects of uncomfortable heat on aggressive cognition, affect, and attitudes. The weapons priming effect was tested on those participants who were not exposed to uncomfortable heat. Participants were exposed to either weapon or neutral object images and were given a Stroop test to assess accessibility of aggressive cognition. Participants primed with a gun had more aggressive thoughts than those primed with a neutral object. The effect was fairly small, which seems to be a theme with this line of research, but definitely noticeable.

That finding by Anderson et al (1996) was promising. The next step was to examine whether weapons truly semantically primed aggressive thoughts. Anderson et al (1998) conducted two experiments. This is where Bruce Bartholow and I (both graduate students at the University of Missouri at the time) come in. I had already been exposed to the schema and script theories described in the article's introduction, which helped considerably in designing the experiments. I did a lot of the legwork to find a reasonably sensitive measure of accessibility of aggressive cognition, and eventually settled on a version of the pronunciation task, in which participants read the target word into a microphone, and the time between the onset of the stimulus and the moment the participant's voice is picked up by the microphone is measured in milliseconds. For our purposes, participants demonstrated relative accessibility of aggressive thoughts if they reacted faster to aggressive target words than to non-aggressive target words. The only difference between our two experiments was the stimuli. In Experiment 1, the prime stimuli were weapon words versus animal words. In Experiment 2, the prime stimuli were a mix of weapon images or a mix of neutral images. Participants went through a few blocks of trials in which they would first see the prime (a weapon or neutral word or image) and then see an aggressive or non-aggressive target word, which they pronounced into the microphone as quickly and accurately as possible. We expected participants to show the fastest reaction times when weapon primes were paired with aggressive target words. In each experiment, our expectations were confirmed. The effect size for Experiment 1 was between small and medium, and in Experiment 2 closer to that of Anderson et al (1996).
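To make the logic of the measure concrete, here is an illustrative sketch of how a per-participant priming score might be computed from mean pronunciation latencies. All of the numbers below are invented for illustration; none come from the actual studies, which worked with trial-level data.

```python
from statistics import mean

# Hypothetical per-participant mean pronunciation latencies in milliseconds,
# keyed by (prime type, target type). All values are invented.
participants = [
    {("weapon", "aggressive"): 520, ("neutral", "aggressive"): 548,
     ("weapon", "nonaggressive"): 555, ("neutral", "nonaggressive"): 550},
    {("weapon", "aggressive"): 500, ("neutral", "aggressive"): 535,
     ("weapon", "nonaggressive"): 540, ("neutral", "nonaggressive"): 538},
]

def priming_score(p):
    """Facilitation (ms) for aggressive targets after weapon primes versus
    neutral primes; positive values mean weapon primes sped responding."""
    return p[("neutral", "aggressive")] - p[("weapon", "aggressive")]

print(mean(priming_score(p) for p in participants))  # prints 31.5
```

A positive average facilitation score would be the signature of a weapons priming effect; the actual analyses, of course, involved full factorial designs and inferential tests rather than simple difference scores.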

Since the publication of Anderson et al (1998), there have been several successful efforts to replicate and, in some cases, extend that finding. The dependent variables may differ (e.g., lexical decision task, aggressive word completion task or AWCT), and the experimental design may be either between-subjects or within-subjects, but the basic concept is the same. Experiment 2 of Bartholow et al (2005) started out as a replication and extension, and initial drafts of the manuscript included the analysis demonstrating the replication of Anderson et al (1998). That analysis was deleted in the published version. Thankfully, I kept the file with the analyses, although I no longer have the original data file. There is a story behind the delay between when we ran our experiments and the publication date, but that was more of an error by the editor, and can be chalked up to what life was like before electronic submission of manuscripts. The effect was smaller than in any successful experiment up to that point, but still statistically significant. I think of the cognitive experiment by Lindsay and Anderson (2000) as a solid conceptual replication. Bartholow and Heinz (2006) would also offer a good faithful replication as a means of demonstrating that the concept of alcohol had a similar effect on relative accessibility of aggressive cognition. Subra et al (2010) would offer a conceptual replication of Bartholow and Heinz (2006). Bushman (2017) and Benjamin and Crosby (2019) also successfully replicated and extended the Anderson et al (1998) experiments.

This seems like a fairly rosy picture. But here is where I start to have concerns. Every experiment I have mentioned thus far involves either Anderson or Bushman or their associates or former advisees. I have blogged before about experimenter allegiance effects in the context of understanding the discrepancy between the findings of Berkowitz and his various colleagues and those of independent researchers who failed to replicate a weapons behavioral effect, including the Buss et al (1972) direct replication attempt. One of my concerns is that we may have a similar phenomenon with our weapons priming effect research, but there has been almost no work done to independently try to replicate our findings. The Anderson et al (1998) paper is one I am still proud of, and it is cited fairly frequently even today. But there is a problem. Almost anyone independent of our cohort of researchers uses that paper as a means of justifying their own experiments on phenomena that are often entirely divorced from our research. There may be an allegiance effect that we are missing. Most experiments outside of the Anderson-Bushman cohort that could arguably be coded as weapons priming experiments are often secondary to the main thrust of the published papers, or use dependent variables that could defensibly be treated as proxies of aggressive cognition, although reasonable skeptics would certainly have questions and might challenge any judgment that the measures involved are truly such proxies. At least one experiment that was purported to be a conceptual replication of Anderson et al (1998) was later retracted due to the research and data analyses being fundamentally flawed, if not outright fraudulent (see Zhang et al, 2016, which I've written about before). I've seen one successful independent replication of Bartholow et al (2005) that used a somewhat defensible dependent variable (Korb, 2016). Regrettably, publication never came to fruition.

We are left with a cohort of weapons priming effect researchers who have almost exclusively used the GAM as a theoretical model (which is itself deserving of challenge), and who may have produced experiments that are unique to our particular cohort. Independent researchers might take our protocols and find something entirely different. Such researchers may find different, and arguably better, ways to test the same hypotheses we tested. Unfortunately, there appears to be no way to know, unless or until enough aggression researchers come forward and report their own replication attempts either as preprints or as publications. If independent researchers find something different from what my group of researchers found, I think that would be interesting and worth understanding. I would want to know what was different. It is possible, and probably likely, that independent researchers would consider things we would not have considered. How that would impact the overall body of weapons priming research is very much unknown. It could well be the case that there was just something unique about those of us who studied the weapons priming effect as Anderson or Bushman and their various students/associates, and that our findings can be safely ignored moving forward. All I know is that the findings from an experiment I conducted back at the start of my doctoral career (Anderson et al, 1998, Experiment 2) are in line with the overall effect size for aggressive thoughts that I reported in a meta-analysis (Benjamin et al, 2018). With regard to that meta-analysis, I will gladly defend any coding decisions made in collaboration with my third author, even if we clearly differ in how to interpret our findings once we had ascertained that the analyses of effect sizes were sound.

One final thought. Although I was trained via the GAM theoretical model, I am not wed to it. I think a good case could be made that the weapons priming effect is little more than a cognitive response equivalent to classical conditioning, and that outside of possibly cognitive responses, there isn't a whole lot left to write about. I've said that in some other contexts, so I'll say it here. There are theorists (including philosophers) who are trying to place the whole body of weapons effect research into different theoretical frameworks. Their work is worth examining. 

In the meantime, I am very concerned that my particular line of weapons priming effect research is little more than an experimenter allegiance effect. I won't know more unless or until other independent researchers come forward. If their findings are consistent enough with what I and my cohort found, that's swell. We've at least established a cognitive response to what should be an aggression-inducing stimulus. If not, aggression researchers like me need to go back to the drawing board. That is also okay. From my own perspective, I made my peace with this line of research several years ago. I started siding with the skeptics for a reason, and that was because the skeptics had the better argument on matters of theory and on the findings themselves. With regard to the weapons effect, or the weapons priming effect, I started out without a horse in that race. I will likely end my career without a horse in this race. Turns out there is nothing wrong with that.


Thursday, February 29, 2024

Personal Update

It's been a while since I've said much about my own life. Leap Day seems as good a time as any. I have been quiet in terms of blogging and social media for a bit because I have simply been enormously busy. I have taken on some extra adjunct gigs to deal with the effects of inflation, since cost of living increases are very few and far between these days, and to pay down some family medical expenses. Much of my life is spent grading and making sure course links still work. It doesn't mean I don't go to conferences (I still do) or publish (I quietly finished a chapter on authoritarianism a few weeks ago), but that is not currently my primary focus. I have plenty I would love to discuss, but the time to really put the necessary thought into those topics simply does not exist. When it does, I will post here. In the meantime, cheers.

Tuesday, February 27, 2024

The psychology of anti-vax bias

Ron Riggio has a handy primer on how to be best informed about vaccinations, the sort of information to trust, and various biases that can be problematic. A lot of his post is a good application of basic judgment and decision-making research. As a general rule, I agree with getting information from reliable sources, and that social media are rife with disinformation. That said, I did follow a number of well-respected virologists and epidemiologists on what used to be Twitter between 2020 and 2022, and found their posts quite informative. Here's the catch: they also relied on reputable sources (e.g., CDC, rigorous empirical research, etc.).

Sunday, August 20, 2023

The decline of the public university

I'm going to point you to an article by Lisa Corrigan, who writes about the recent "restructuring" of West Virginia University and what it means for the rest of us who work in public colleges and universities, whether flagship institutions or regional colleges and universities (like mine). WVU's administrators hired a consulting firm to determine programs to put on the chopping block, and it is doing away with quite a number of majors, including all of its majors in the languages. About 16% of its faculty will be laid off in the process. I write this as my university is doing a program viability study, and I worry about what the outcome of its recommendations will be.

If you want a more tl;dr version, Dr. Corrigan posted a thread on the platform formerly known as Twitter:

https://twitter.com/DrLisaCorrigan/status/1693252473152999436

The enrollment cliff has been used as a cudgel for much of my professional career - at least since the Great Recession came and went. What I suspect will happen is what Dr. Corrigan says quite bluntly: we will end up with a two-tiered higher education situation where the privileged will have more opportunities to enrich themselves intellectually, and the rest of our students, especially in rural public universities, will just have to get used to fewer options. After all, workforce development is the big buzzword these days. We'll also see a future in which institutions operate with fewer full-time faculty, with the ensuing decline in quality that comes with understaffed programs. This was not the future I wanted for our next cohorts of students. Unless there is a huge fuss made regarding adequate funding for our institutions and a move back to ensuring academic freedom that is untouched by legislators, this is the future that awaits. It is bleak.


Tuesday, July 25, 2023

Political interference in the classroom is increasing, and that should disturb all of us

Reading this story about how a simple guest lecture almost led to this Texas professor's firing was unsettling, to say the least. Although I don't know Dr. Joy Alonzo's work, it appears that she's a respected expert on the opioid crisis in the US. The content of her lecture, at least from the PowerPoint slides available, suggests she gave a matter-of-fact presentation of the opioid crisis, as well as policies that could mitigate or exacerbate the problem. She just happens to work in Texas, which has arguably managed to make that particular situation worse. Any expert who understands the impact of public policy is inevitably going to end up saying something when state or national policies are doing more harm than good. Politicians may not like that fact, nor may partisans of any stripe, but that is how professionals work. Her reward for offering her expertise to a class at another university was to end up on paid leave and investigated - and nearly fired. Why? Some of the content of the lecture might have offended the Lt. Governor of Texas. She managed to dodge a bullet, as no evidence of wrong-doing could be found, but I can only imagine that she regrets the day she joined the Texas A&M faculty.

Let me step back for just a moment. I am in the behavioral and social sciences. When I started my first position at Oklahoma Panhandle State University, I quickly became friends with a then-junior faculty member in my department who was and probably is considerably more conservative than I am. We are still good friends although we each work in completely different locations now. One thing we shared in common was a belief that we both expressed often: social scientists are equal opportunity offenders. If we are doing our job right as educators and researchers, we're going to present evidence that will end up upsetting someone. That's not because we enjoy upsetting our intended audiences, but because facts can be inconvenient, depending on one's worldview. I also believe that since our research output can inform policymakers, we have an obligation to call them out when they are misunderstanding our work, misusing our work, or ignoring our findings at the expense of the greater social good. If these decision-makers are unhappy with our informed opinions, that's their problem. At least that's how it should be. I take the same view with students. Some course content will inevitably challenge beliefs and maybe that leads to some cognitive dissonance, or whatever. I can't just change the facts to please others. My job is to make the evidence available. What students choose to do with that information is up to them, and quite frankly, I have little interest in what they do with that information once the course is finished for the term. As a result, some semesters my course evaluations can look a bit bleak. Imagine a simple introductory course in Psychology that includes materials about research on gender that go against what a subset of students may have learned in Sunday School. I'll get flak for the simple fact that the information is in the textbook, and that I may have tested on that information. So it goes.

In mentioning all of this I do know that I am working during a difficult moment in higher education. The tendency for legislators, primarily in Republican-run states in my country, to micromanage our instruction and research is only intensifying. Texas and Florida are probably the most obvious examples, but any of us in the so-called "red states" are at risk of being cancelled. I expect things to get worse before they get better. I can hope that the tide turns back in favor of rational thought, defended with empirical evidence, and that this era of filtering all data and ideas through tribal grievances will end with minimal collateral damage to careers and to students' ability to function after college. Then again, I am well aware that hope and $1.50 might buy you a candy bar, and little else.

Saturday, July 15, 2023

America's Confidence in Higher Education is Dropping

You can read the article here. The article and the poll don't give much in the way of context for why Americans' confidence in higher education has dropped so precipitously. Some measure of political party affiliation was used, so that helps a bit. If you analyze the cross-tabs, you'll find that, generally, those who identify as Democratic have had higher confidence in higher education than those who identify as Republican or Independent, and that seems to be a consistent pattern across time. However, since 2015, confidence has dropped among all polled regardless of party affiliation. I suppose university and college administrators in blue states can take some consolation in the finding that, as of 2022, more than half of Democrats were confident in higher education as an institution, but even that is a noticeable decline from 2015.

I'll speak only anecdotally for the time being, as I don't have the time or energy to really do a deep dive into other data on the matter. I've noticed a tendency, usually political and deeply partisan, to attack colleges and universities. There was and still is a moral panic about not enough ideological conservatives being hired at institutions of higher education. I've seen that tired attack for as long as I've been an educator. I guess I don't see it at the sort of regional colleges and universities that would typically hire or at least interview me, and I've looked. There's definitely a moral panic over the content we teach in our courses, and I've seen so much fuss made about CRT and "wokeness" at colleges and universities that I'm pretty much numb to such attacks. There isn't much "woke" about means and standard deviations, folks. So it goes. I think I can understand how those who regularly rely on Fox News for their information may have changed their attitudes toward colleges and universities. I wonder if our colleges and universities are increasingly seen as not doing enough by at least some subsection of those who identify as Democratic. I'm pretty jaded about most DEI statements and offices at universities like mine. I wonder if that jadedness is shared. Then there is the ongoing problem of the increasing student loan burden that students and parents alike take on in order to obtain degrees that, while leading to nominally middle-income careers, are not lucrative enough to pay back those loans.

There's so much to unpack, and I think that particular article gives us only a minimal amount of information to go on. At least we know what the topline numbers are. We just don't entirely know what they mean. And we need to better understand what is going on behind those numbers in order to make sure that we, as institutions, can defend ourselves in an increasingly difficult political and social environment.