It all started with a meta-analysis...
Anyone who is familiar with the work I have published and presented already knows that I coauthored one of the first published articles establishing that the mere presence of weapons facilitates the accessibility of aggressive thoughts. The experiments that my coauthors and I conducted for that paper, and for a subsequent one, were solid. As someone who was familiar with the Berkowitz and LePage (1967) experiment, the controversy surrounding that experiment, and the first meta-analysis examining the weapons effect, I came to the conclusion that probably many did at the time: that the effect was real and meaningful.
Now, I mostly teach courses for a living and don't get to conduct a lot of research. There are tradeoffs, but generally it has not been a bad arrangement for me. But I never gave up my interest in the weapons effect, and I tried to keep up with any new published research to share with my students. A few years ago, I decided that it was time to examine the state of the research on the weapons effect more systematically. Initially, I undertook this task by myself, with an occasional student to help me code studies. Using the primitive software I had available, I was able essentially to replicate the Carlson et al. (1990) meta-analysis and provide some preliminary support for the notion that there was a small-to-moderate average effect size for the influence of weapons on aggressive cognitive outcomes.
That was nice insofar as it went. I made some mention of what I was working on in one of my social media outlets, and out of the blue, Brad Bushman expressed interest in collaborating with me. He offered expertise in the latest techniques and software, and facilitated my getting a license for Comprehensive Meta-Analysis (CMA) software. I read through the manuals after downloading the software and then transferred my data files to the CMA spreadsheet. So far, so good. Suddenly, I was able not only to compute effect sizes, but also to obtain more meaningful publication bias estimates, account for moderators such as age, and examine potential decline effects. Brad wanted something a bit grander than I had intended initially, and before long, I had a rather complex database including not only behavioral and cognitive outcomes, but also affective and appraisal outcomes. By the time all was said and done, I had one heck of a database that included not only published but also unpublished reports (primarily dissertations and theses). The findings seemed to support contemporary theories, and again all seemed to be well. It took some time to get an article published, but eventually there was some success.
And then I was alerted to a database error. I won't go into the details, as those have been reported elsewhere, but I learned the hard way that in CMA, not all columns are created equal. After locating the source of the error, I was able to recalculate the basic analyses, and a colleague was able to rerun some very necessary sensitivity analyses. The findings were eye-opening. We were already aware that there were serious issues with publication bias, especially with regard to behavioral outcomes. However, the new analyses showed that publication bias was more of a problem than we had initially imagined. By the time all was said and done, the corrected analyses still allowed me to conclude that the mere presence of weapons reliably primes aggressive thoughts and hostile appraisals. Behavioral outcomes, however, are another matter altogether. Our initial results were already troubling, as it was difficult to triangulate on a "true" average effect size for behavioral outcomes. The problem was even more pronounced after the database error was corrected. To put it bluntly, there is far too much variability among the studies in which a behavioral outcome was measured to state with any confidence that the mere presence of weapons, even under conditions of provocation, facilitates aggressive behavior. Nor can I state with any confidence that there is no effect. The findings are, in other words, inconclusive at best.
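For readers who want intuition on why publication bias distorts a meta-analysis so badly, here is a minimal simulation sketch in Python. It is a generic illustration, not drawn from our database: the "true" effect of d = 0.2, the sample size of 15 per condition, and the one-tailed significance cutoff are all arbitrary values I chose for the example. The point is simply that if only "significant" studies reach print, the published literature overstates the true effect.

```python
import math
import random
import statistics

random.seed(42)

def simulate_study(true_d, n_per_group):
    """Simulate one two-group experiment; return (observed d, 'significant?')."""
    control = [random.gauss(0, 1) for _ in range(n_per_group)]
    treat = [random.gauss(true_d, 1) for _ in range(n_per_group)]
    pooled_sd = math.sqrt((statistics.stdev(treat) ** 2 +
                           statistics.stdev(control) ** 2) / 2)
    d = (statistics.mean(treat) - statistics.mean(control)) / pooled_sd
    # Rough two-sample t statistic; ~one-tailed .05 cutoff for small df
    t = d / math.sqrt(2 / n_per_group)
    return d, t > 1.70

true_d = 0.2  # small true effect, arbitrary illustration value
all_ds, published_ds = [], []
for _ in range(2000):
    d, significant = simulate_study(true_d, 15)
    all_ds.append(d)
    if significant:            # journals "publish" only significant results
        published_ds.append(d)

print(f"mean d, all studies:        {statistics.mean(all_ds):.2f}")
print(f"mean d, 'significant' only: {statistics.mean(published_ds):.2f}")
```

With only significant studies surviving, the naive pooled estimate lands far above the true effect; this is the kind of inflation that sensitivity analyses for publication bias try to detect and correct.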
I can speculate about why it is so hard to estimate the average effect size of the weapons effect for aggressive behavioral outcomes. I suspect much of the issue comes down to the quality of the research. Much of the early research was conducted during the 1970s and 1980s. After Carlson et al. (1990) reported their findings, behavioral research more or less ended, the handful of exceptions duly noted. In other words, after about 1990, we dropped the ball. The behavioral research that had been conducted used small samples (often with an n of 10 or maybe 15 per treatment condition). It does not take a genius in statistical theory to figure out that there is going to be a lot of variability from study to study on that basis alone. That should be troubling to any of us who care about this research area. Some of the field research is simply awful. I find it counterintuitive at best that people will honk their horns at someone who appears to have a gun in their vehicle, as Turner and colleagues (1975) found in one of their field experiments. The fact that these authors could not replicate the finding in a larger-sample experiment is telling, as is the fact that two subsequent published and unpublished field experiments failed to replicate the initial Turner et al. (1975) finding. Seriously, think about it. Who in their right mind acts in any way to provoke (in this case, by honking one's horn at) someone who is already driving around with a firearm?
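The small-sample point is easy to demonstrate. Below is a quick Python sketch (again, an illustration with arbitrary values, not our actual data: I assume a fixed true effect of d = 0.3 and simply vary the per-condition sample size) showing how much the observed effect size bounces around from study to study when each study has only a handful of participants per cell.

```python
import math
import random
import statistics

random.seed(1)

def observed_d(true_d, n_per_group):
    """Simulate one two-group experiment; return the standardized mean difference."""
    control = [random.gauss(0, 1) for _ in range(n_per_group)]
    treat = [random.gauss(true_d, 1) for _ in range(n_per_group)]
    pooled_sd = math.sqrt((statistics.stdev(treat) ** 2 +
                           statistics.stdev(control) ** 2) / 2)
    return (statistics.mean(treat) - statistics.mean(control)) / pooled_sd

# Same true effect every time; only the sample size changes
spread = {}
for n in (10, 50, 200):
    ds = [observed_d(0.3, n) for _ in range(2000)]
    spread[n] = statistics.pstdev(ds)
    print(f"n = {n:3d} per condition: study-to-study SD of observed d = {spread[n]:.2f}")
```

At n = 10 per condition, the study-to-study spread of the observed d is larger than the true effect itself, so individual studies will routinely show big effects, null effects, and even reversed effects purely by chance. A literature built mostly of such studies cannot be expected to converge on a stable average.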
If someone asks me if weapons prime aggressive behavior, about all I can say is that I have no earthly way of knowing, based on the available data. I am more confident regarding cognitive and appraisal outcomes, with moderate publication bias effects duly noted. I am also confident that the effect occurs across age ranges and regardless of whether the sample includes college students or non-students. Thankfully, I am still fairly confident that this is a literature that has avoided serious decline effects. But ultimately the acid test is whether or not a stimulus can prime tangible aggressive behavioral outcomes. I am no longer convinced that the mere presence of weapons does so. I am not convinced yet that the mere presence of weapons fails to prime aggressive behavior either. The truth, based on the data, is that I just don't know. As noted earlier, the findings are inconclusive, and probably always were.
Whether or not this line of research is really worth reviving is an open question. It is conceivable that weapons may facilitate aggressive driving behavior to some extent, as Hemenway and colleagues suggest on the basis of some cross-sectional research. Bushman (in press) presumably found support for Hemenway's findings in a driving simulator experiment, but I am not sure I can make much of that work (its samples, at roughly n = 30 per condition, are still a bit small for my comfort) until I see replications from independent researchers.
Really, the upshot to me is that if this line of research is worth bringing back, it needs to be done by individuals who are truly independent of Anderson, Bushman, and that particular cohort of aggression researchers. Nothing personal, but this is a badly politicized area of research, and we need investigators who can view this work from a fresh perspective as they design experiments. I also hope that some large-sample and multi-lab experiments are run in an attempt to replicate the old Berkowitz and LePage experiment, even if the replications are more conceptual in nature. Those findings will be what guide me as an educator going forward. If those findings conclude that there really is no effect, then I think we can pretty well abandon this notion once and for all. If, on the other hand, the findings appear to show that the weapons effect is viable, we face another set of questions - including how meaningful that body of research is in everyday life. One conclusion I can already anticipate is that any behavioral outcomes used will be mild in comparison to everyday aggression, and more importantly to violent behavior. I would not take any positive findings and jump to conclusions regarding the risk of gun violence, for example; jumping to such conclusions would needlessly politicize the research, and that would turn me off further.
For now, the jury is out. Weapons appear to prime aggressive thoughts. Big deal. Until some well-designed behavioral research is available, we'll have to wait before we know much more. In the meantime, social psychology textbook authors may want to revise their aggression chapters when it comes to discussing the weapons effect. If the textbook authors won't, then I will make sure to mention what I know in my classes.