The blog of Dr. Arlin James Benjamin, Jr., Social Psychologist
Monday, April 30, 2018
Paul Westerberg's old Replacements-era lyrics still come in handy.
"I'm so, I'm so unsatisfied"
Obviously there is a point to this brief post. Although I am only catching wind of this a week late, I wanted to make sure you all were aware that whatever anti-torture provisions the APA put in place in the wake of the torture scandal in which the organization played a role are in danger of being overturned by the current APA leadership. In the aftermath of the Hoffman Report, I expressed a good deal of skepticism about the APA's ability to get its act together and truly take ownership of its part in the human rights abuses committed by our government. In this case, maintaining a skeptical stance has served me well.
I am saddened, actually. Back around 2007 and 2008, I included a statement on my Social Psychology Network profile urging anyone who happened to stumble upon it to boycott the APA (withholding dues is what we called it). I kept that statement in place until the campaign had run its course. It's a shame, but it appears that the measures necessary for the organization to fully rehabilitate itself were barely taken, and are now apparently likely to be ignored altogether.
Psychology is a very multifaceted profession, and part of our mission is supposed to be serving as a helping profession (broadly speaking). Taking part in any of the practices that the international community considers torture is the exact opposite of helping. Acting as if we need no safeguards to prevent fellow professionals from profiting from human suffering is no way to be a helping profession. I am now convinced that the APA is irreparably broken. We desperately need an umbrella organization that can represent not only scientific rigor, but also human well-being and dignity. The opportunity for the APA to be that organization has long since passed. I take no satisfaction in its continued fall from grace.
Tuesday, April 24, 2018
Progress Report
It has been a while since I posted an update on my scholarly progress. A couple of years ago I wrote about a life-altering event that has really influenced how I divide up my available time. I am thankfully less of a caretaker than I was even a year ago, which is a positive. I did have to take on some adjunct work and extra summer course work in order to pay down some medical bills. That is a negative. Much of my focus went to one specific meta-analysis, which is currently in press. I've written just a bit about it before. That experience opened me up to a much more skeptical accounting of the state of the weapons effect literature than I had previously held. That's a positive.
In the meantime, let's recap what has already appeared in print since 2016:
Benjamin, A. J., Jr., & Oelke, S. E. (2016). Framing effects on attitudes toward torture. Kommunikáció, Média, Gazdaság, 13(1), 229-241.
Benjamin, A. J., Jr., & Bushman, B. J. (2016). The weapons priming effect. Current Opinion in Psychology, 12, 45-48.
Benjamin, A. J., Jr. (2016). Right-wing authoritarianism and attitudes toward torture. Social Behavior and Personality: An International Journal, 44, 881-888.
Benjamin, A. J., Jr. (2016). Aggression. In H. S. Friedman (Ed.), Encyclopedia of Mental Health (2nd ed., Vol. 1, pp. 33-39). San Diego, CA: Academic Press.
The article published with Sara Oelke (a former student) was officially published in 2016, but neither of us knew about its publication until 2017. I had been warned that publishing in Hungarian journals was a bit less seamless than in outlets based in the US and western Europe, so I wasn't entirely surprised. We were both just glad to see it in print, especially since Sara worked so hard for that one. I also have the meta-analysis on the weapons effect with Sven Kepes and Brad Bushman coming out later this year. It was an ordeal and I am glad to be done with it, but I found it a valuable learning experience. I have two chapters that will appear in the Wiley-Blackwell Encyclopedia of Personality and Individual Differences. One deals with Type A personality, and the other with implicit motives. Both allow me to indulge my increased interest in promoting a more skeptical look at major topics in my field.
Most years I go to about two academic conferences. My regular one is with a fairly small organization called the National Social Science Association, which holds its annual meeting in Las Vegas each spring. I have been on the board of that organization for some time. I've skipped only one meeting since 2005, and that was the year my wife broke her hip. Beyond that, I try to make APS-sponsored conventions my priority. Every other year, APS sponsors the ICPS conference (last year's was in Vienna, and next year's is in Paris). Part of the draw of the APS conferences is the opportunity to sit in on talks about the state of psychological research and the issues surrounding replication. I also sometimes run into old friends from my grad school days, and that is inevitably a positive. Budget cuts are requiring me to self-fund more of these ventures, which is quite unfortunate. I'll probably continue my travels despite the budget cuts for the short term. Longer term? Realistically, something will have to change.
That is the public side of what I do. Behind the scenes I am involved in a good deal of student mentoring. This year, I mentored two students who completed a project for our annual Research Symposium. I mentored two students in 2017, and three in 2016. Some of this work will eventually be written up and submitted for publication, and some of these projects ended when the Research Symposium ended for that particular academic year. The more important point of the exercise in each case was to give students an opportunity to see a project through from conceptualization to presentation of findings and, finally, the write-up (at least for course grade purposes). Those sorts of hands-on experiences are crucial for understanding how scientific inquiry really works - good, bad, and ugly. I will certainly be doing more of the same in coming years.
I usually end up as a peer reviewer for at least an article or two each year, and for a few years I peer reviewed poster abstracts for SPSP (I am taking a break from that this year). One of the perks of peer reviewing is that I have done so for journals that are top-tier as well as ones that are often overlooked. I have learned that the peer review process seems to work about the same regardless of a journal's prestige. Whether or not I should take comfort in that is another story, I suppose. What I take away from this set of experiences is that we as scholars should be less concerned about impact factors and more concerned with simply getting good work published somewhere.
In the meantime, I hope to get some manuscripts submitted over the summer as time permits. The positive aspect of working with the students who got involved in my research program is that I am sitting on some data sets that are potentially publishable, and now that an article coauthored by one of our students has been published, I have proof of concept that other projects involving my students can result in tangible publications. Since I am at a university that focuses on teaching much more than research, I don't face a great deal of pressure to publish in the supposed premier journals in my field. For the purposes of my undergraduate students, simply having a publication to include on a CV or grad school application is good enough.
Friday, April 13, 2018
Learning Styles: Another Myth Bites the Dust
During my years in higher education, I have seen fads come and I have seen them go. The concept of learning styles, as currently theorized and measured, is no exception. This article explains the situation quite well. The sad thing is that when we are evaluated as instructors, our assessments must often include evidence that we cater to all possible learning styles, even though the evidence for the concept is flimsy at best. Don't get me wrong: I see some value in mixing it up a bit if it gets students engaged in the learning process. The main takeaway is that a perceived preference for taking in new information through a particular modality does not equate to success in learning through that modality, and assessing our own instruction, let alone the effectiveness of our courses and degree programs, as if the myth were true is a fool's errand.
Thursday, April 12, 2018
Teaching Social Psychology: A Rethink
Earlier this semester, I taught an eight-week section of Social Psychology. This is an upper-level undergraduate course. After years of more or less augmenting whatever textbook I used, I decided to make a few clean breaks from the traditional approach to delivering course material.
In the introductory chapter, I spent some time introducing issues regarding the replicability of social priming effects. When we reached the methods chapter, I largely set the textbook's treatment aside and focused much of my attention on the importance of understanding the major research designs and the importance of replication. In the process, I drew my lectures and discussions from various published articles on the replication crisis, in particular the research published by the Open Science Collaboration. I also noted that many of the effects we study that are counter-intuitive and subtle (i.e., have small effect sizes) are ones where we should be especially cautious and skeptical. And I made sure my students understood publication bias, which can lead us to believe effects are real when they may not hold up under further scrutiny.
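To make the publication bias point concrete for students, a quick simulation helps. What follows is a minimal sketch of my own (illustrative only, not drawn from any of the articles we discussed): simulate many small studies of a tiny true effect, "publish" only the significant positive results, and compare the published average to the truth.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_d, n_per_group, n_studies = 0.10, 20, 5000  # tiny true effect, small samples

published, all_estimates = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_d, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    # Cohen's d estimate for this simulated study
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    d = (treatment.mean() - control.mean()) / pooled_sd
    all_estimates.append(d)
    if p < .05 and d > 0:  # only "significant" positive results get published
        published.append(d)

print(f"True effect:                 d = {true_d:.2f}")
print(f"Mean across all studies:     d = {np.mean(all_estimates):.2f}")
print(f"Mean of 'published' studies: d = {np.mean(published):.2f}")  # badly inflated
```

The published mean comes out several times larger than the true effect, which is exactly the intuition I want students to carry into the rest of the course.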
Over the course of several chapters, we noted in class that some classic research either does not appear to be holding up or is turning out to be inconclusive. Ego depletion research was noted as a prime example. I also pointed out some issues with IAT research, as well as with research on the weapons effect, where the jury is clearly out on whether weapons prime aggressive behavioral outcomes in either lab or field research. Subliminal priming research got touched upon as well, since failures to replicate Bargh's work are now pertinent.
Going forward, I am going to hammer on these points more this summer and next fall. Hopefully the textbooks will catch up in the next year or so. But until then, I may have more students asking how much of what they read in their textbooks is actually real.
Tuesday, April 10, 2018
Serious Question
Does anyone else use Bushman's 22-item word completion task to measure aggressive cognition? If so, have you looked at its internal consistency or test-retest reliability? If you have, please get in touch with me.
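For anyone who has item-level data handy, here is the sort of check I have in mind - a minimal sketch assuming each fragment is scored 1 if completed aggressively and 0 otherwise (the file names and column layout are hypothetical):

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha (equivalent to KR-20 when items are scored 0/1)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: one row per participant, one 0/1 column per word fragment.
items_t1 = pd.read_csv("word_completion_time1.csv")  # 22 item columns
items_t2 = pd.read_csv("word_completion_time2.csv")  # same participants, retest

print("Internal consistency (alpha):", round(cronbach_alpha(items_t1), 2))

# Test-retest reliability: correlate total scores across the two sessions.
r = np.corrcoef(items_t1.sum(axis=1), items_t2.sum(axis=1))[0, 1]
print("Test-retest r:", round(r, 2))
```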
Monday, April 9, 2018
The potential to make positive messages go viral in dark times
Although this is an old post on Corpus Callosum, I thought it worth highlighting:
As we sift through history, we see that there have been many who would have changed the course of events for the better. Sometimes, the geometry of the Universe permits this; sometimes, it impedes it. History has a lesson for us. As the Roman empire was crumbling, and the Dark Ages began, there was a great struggle among theologians. They cast aside Plato, and with him, his beloved tetrahedron, cube, octahedron, and dodecahedron. Worst of all, even the supremely elegant icosahedron was tossed back into the sea. They thought the cross would solve everything. Alas, they could only think in two dimensions. One of them dared to dissent. He carried the peculiar name Pelagius. He promoted the idea that humans are basically good, and that it is through their free choice of actions that they keep themselves good. In contrast, the predominant view at the time was that of St. Augustine, who believed that humans were fundamentally tainted by original sin, and any good they had came from the grace of god. The geometry of the Universe was not kind to Pelagius, although ultimately he managed to avoid the worst of fates. From Wikipedia:
When Alaric sacked Rome in 410, Pelagius fled to Carthage, where he came into further conflict with Augustine. His follower Coelestius was condemned by a church council there. Pelagius then fled to Jerusalem, but Augustine's followers were soon on his trail; Orosius went to Jerusalem to warn St Jerome against him. Pelagius succeeded in clearing himself at a diocesan synod in Jerusalem and a provincial one in Diospolis (Lydda), though Augustine said that his being cleared at those councils must have been the result of Pelagius lying about his teachings. Augustine's version of Pelagius's teachings about sin and atonement were condemned as heresy at the local Council of Carthage in 417.
Those are the people who told us to put away childish things. Those are the people who cast aside the icosahedron as a mere trinket. But in so doing, they brought us the Dark Ages. The online Catholic Encyclopedia contains the following commentary about Pelagius:
Meanwhile the Pelagian ideas had infected a wide area, especially around Carthage, so that Augustine and other bishops were compelled to take a resolute stand against them in sermons and private conversations.
Imagine that, being infected with the notion that humans are fundamentally good. Is it some kind of virus?
I saw in this passage a reminder that the concept of ideas going "viral" has been with us as a species for a long time - easily predating the current era of YouTube, Twitter, Facebook, Instagram, Tumblr, Snapchat, and a plethora of other social media platforms (within the context of which we usually discuss news, ideas, gossip, videos, and such going viral), and undoubtedly going well back to a time when we relied upon the oral tradition as our medium for communication. Clearly, the power structure of the Church of Pelagius' era considered his ideas viral in a negative sense, due to their subverting the prevailing dogma. And yet one could make a case that Pelagius' ideas were viral in a more positive sense, as a potential cure for the oncoming darkness, to the extent that those open to his ideas might see the path to salvation in a much more positive view of themselves and their fellow humans. Under better circumstances, Pelagius' ideas might have been more successful. I am reminded of a scene from the film I Am Legend in which the protagonist Robert Neville discusses Bob Marley with Anna:
He had this idea. It was kind of a virologist idea. He believed that you could cure racism and hate… literally cure it, by injecting music and love into people's lives. When he was scheduled to perform at a peace rally, a gunman came to his house and shot him down. Two days later he walked out on that stage and sang. When they asked him why – He said, "The people, who were trying to make this world worse… are not taking a day off. How can I? Light up the darkness."
As a social psychologist and especially as an educator, my work involves injecting something different into people's lives: the intellectual skills needed to think sufficiently critically as consumers (and hopefully producers) of science and as consumers of mass media. The messages I try to make go viral involve actively following the data wherever they may lead us to find closer and closer approximations of the truth, and in doing so setting us free. Even better, each of us has the power to learn these skills and use them. They require effort and discipline, but in time, any one of us can develop expertise in an area that can change others' lives for the better.
We live in troubled times, as events over the past two years have made abundantly clear. It appears at times as if we as a species are staring into the abyss of darkness once more. What ideas, what actions, might go sufficiently viral in order to instead make this world better rather than worse? Who among us will light up the darkness? Will we each do so in our own way?
Food for thought
Here's a blog post worth reading: The Meaningfulness of Lab-Based Aggression Research
The main takeaway from the post is the need to focus more on ensuring that our research results replicate, that our measures of aggression are sufficiently valid (here meaning that they relate in some meaningful way to aggression in everyday life), and that in the meantime we remain cautious about what our research is actually telling us. To me this seems fairly common-sense, but it regrettably needs repeating.
Sunday, April 8, 2018
Another bookmark - media violence and violent crime
This chapter seems to offer a much-needed nuanced treatment of the presumed link between media violence and violent crime. I wanted to make sure I had this bookmarked a bit more publicly because this is something I would like to come back to when time permits. Right now my quick take is that the public need not panic. A causal link between media violence (broadly defined) and criminal violence (broadly defined) appears to be at best elusive.
Saturday, April 7, 2018
Blaming Violent Video Games for Mass Shootings is Very Misguided and Deceptive!
Violent video games are played all over the world but mass shootings are an American problem — here's why pic.twitter.com/2Rn26jiRgu
— Tech Insider (@techinsider) April 6, 2018
Bookmarked for now
FlexibleMeasures.com: Competitive Reaction Time Task (CRTT)
When I was a graduate student, a version of this task was used in our lab. I strictly ran cognitive priming experiments, but I was certainly aware of how it was used and how its data were analyzed during lab meetings. This website, and the article accompanying it, offer an eye-opening look at what happens when a measure of aggressive behavior is not standardized. I want to come back to that at some point when time permits. Of course, I am well aware that lack of standardization plagues other measures of aggression as well. The old Buss teacher-learner measure of aggression was equally flexible, and I suspect a good deal of cherry-picking of findings occurred when that technique was more in vogue. I am completing a meta-analysis in which I can find one very obvious example, and that is only because the authors thankfully published all of the analyses they ran, even if they emphasized only the one that turned out to be statistically significant! I also want to comment on some psychometric issues I am noticing with some of our measures of aggressive cognition. I think we have reason to be seriously concerned about the quality of the measurements we use in aggression research, and I am increasingly suspicious of the findings that are published as a consequence.
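To see why that flexibility matters, consider a minimal sketch of my own (the file and column names are hypothetical, and the scoring rules below are only a few illustrative variants; the site catalogues many more): the same raw noise-blast settings can yield noticeably different "aggression" scores depending on which rule an author happens to report.

```python
import pandas as pd

# Hypothetical raw CRTT data: one row per trial per participant,
# with the noise intensity (1-10) and duration the participant set.
trials = pd.read_csv("crtt_trials.csv")  # columns: participant, trial, intensity, duration_ms

scores = pd.DataFrame({
    # A few scoring rules of the kind that have appeared in the literature:
    "mean_intensity":  trials.groupby("participant")["intensity"].mean(),
    "first_trial":     trials[trials["trial"] == 1].set_index("participant")["intensity"],
    "intensity_x_dur": trials.assign(ixd=trials["intensity"] * trials["duration_ms"])
                             .groupby("participant")["ixd"].mean(),
    "high_settings":   trials.assign(hi=trials["intensity"] >= 8)
                             .groupby("participant")["hi"].mean(),
})

# If the measure were standardized, these alternatives would at least rank
# participants the same way. In practice, check how much they diverge:
print(scores.corr(method="spearman").round(2))
```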
The gun may pull the trigger (or not) - or how I became a weapons effect skeptic
It all started with a meta-analysis...
Anyone who is familiar with the work I have published and presented already knows that I coauthored one of the first published articles establishing that the mere presence of weapons facilitates the accessibility of aggressive thoughts. The experiments that my coauthors and I conducted for that paper and in a subsequent paper were solid. As someone who was familiar with the Berkowitz and LePage (1967) experiment, the controversy surrounding that experiment, and the first meta-analysis examining the weapons effect, I came to a conclusion that probably many did at the time: that the effect was real and meaningful.
Now, I mostly teach courses for a living and don't get to conduct a lot of research. There are tradeoffs, but generally it has not been a bad arrangement for me. But I never gave up my interest in the weapons effect, and I tried to keep up with any new published research to share with my students. A few years ago, I decided that it was time to more systematically examine the state of the research on the weapons effect. Initially, I undertook this task by myself, using an occasional student to help me code studies. Using the primitive software I had available, I was able to essentially replicate the Carlson et al. (1990) meta-analysis and provide some preliminary support for the notion that there was a small-to-moderate average effect size for the influence of weapons on aggressive cognitive outcomes.
That was nice insofar as it went. I made some mention of what I was working on in one of my social media outlets, and out of the blue, Brad Bushman expressed interest in collaborating with me. He offered expertise in the latest techniques and software, and he facilitated my getting a license for the meta-analysis software package CMA (Comprehensive Meta-Analysis). I read through the manuals after downloading the software and then transferred my data files to the CMA spreadsheet. So far, so good. Suddenly I was able not only to compute effect sizes, but also to obtain more meaningful publication bias estimates, account for moderators such as age, and examine potential decline effects. Brad wanted something a bit grander than I had initially intended, and before long I had a rather complex database including not only behavioral and cognitive outcomes, but also affective and appraisal outcomes. By the time all was said and done, I had one heck of a database that included not only published but also unpublished reports (primarily dissertations and theses). The findings seemed to support contemporary theories, and again all seemed to be well. It took some time to get an article published, but eventually there was some success.
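For readers who have never run one of these analyses, the core computation is less mysterious than the software makes it look. Here is a minimal sketch of a DerSimonian-Laird random-effects pooling, using made-up effect sizes rather than anything from our actual database:

```python
import numpy as np

# Hypothetical per-study standardized mean differences and their variances.
d = np.array([0.45, 0.10, 0.62, 0.25, -0.05, 0.38])
v = np.array([0.040, 0.090, 0.055, 0.030, 0.120, 0.065])

# Fixed-effect weights and pooled estimate
w_fixed = 1 / v
d_fixed = np.sum(w_fixed * d) / np.sum(w_fixed)

# DerSimonian-Laird estimate of between-study variance (tau^2)
Q = np.sum(w_fixed * (d - d_fixed) ** 2)
df = len(d) - 1
C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)

# Random-effects pooled estimate and its standard error
w_re = 1 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

print(f"Q = {Q:.2f} (df = {df}), tau^2 = {tau2:.3f}")
print(f"Random-effects mean d = {d_re:.2f}, 95% CI "
      f"[{d_re - 1.96 * se_re:.2f}, {d_re + 1.96 * se_re:.2f}]")
```

CMA adds a great deal on top of this (moderator analyses, the various publication bias routines, and so on), but the pooled estimate itself is just a weighted average of this kind.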
And then I was alerted to a database error. I won't go into the details, as those have been reported elsewhere, but I learned the hard way that in CMA, not all columns are created equal. After locating the source of the error, I was able to recalculate the basic analyses, and a colleague was able to recalculate some very necessary sensitivity analyses. The findings were eye-opening. We were already aware that there were some serious issues with publication bias, especially with regard to behavioral outcomes. However, the new analyses showed that publication bias was more of a problem than we had initially imagined. By the time all was said and done, the corrected analyses still allowed me to conclude that the mere presence of weapons reliably primes aggressive thoughts and hostile appraisals. Behavioral outcomes, however, are another matter altogether. Our initial results were already troubling, as it was difficult to triangulate around a "true" average effect size for behavioral outcomes. The problem was even more pronounced after the database error was corrected. To put it bluntly, there is way too much variability among the studies in which a behavioral outcome was measured to state with any confidence that the mere presence of weapons, even under conditions of provocation, facilitates aggressive behavioral outcomes. Nor can I state with any confidence that there is no effect. The findings are, in other words, inconclusive at best.
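The sensitivity analyses my colleague ran go well beyond anything I would sketch here, but the basic logic of one common publication bias check, Egger's regression test, is simple enough to show with made-up numbers (not our data): small, imprecise studies should not systematically report larger effects, and an intercept far from zero suggests that they do.

```python
import numpy as np
from scipy import stats

# Hypothetical effect sizes and standard errors from a set of studies
# (made-up numbers in which the less precise studies report larger effects).
d  = np.array([0.70, 0.55, 0.48, 0.30, 0.22, 0.15, 0.10])
se = np.array([0.35, 0.30, 0.25, 0.15, 0.12, 0.08, 0.05])

# Egger's test: regress the standardized effect (d / se) on precision (1 / se).
# With no small-study effect, the intercept should be close to zero.
res = stats.linregress(1 / se, d / se)
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(d) - 2)

print(f"Egger intercept = {res.intercept:.2f}, t = {t:.2f}, p = {p:.3f}")
```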
I can speculate about the reasons why it is so hard to estimate the average effect size of the weapons effect for aggressive behavioral outcomes. I suspect much of the issue comes down to the quality of the research. Much of the early research was conducted during the 1970s and 1980s. After Carlson et al. (1990) reported their findings, behavioral research more or less ended - the handful of exceptions duly noted. In other words, after about 1990, we dropped the ball. The behavioral research that had been conducted utilized small samples (often with an n of 10 or maybe 15 per treatment condition). It does not take a genius in statistical theory to figure out that there is going to be a lot of variability from study to study on that basis alone. That should be troubling to any of us who care about this research area. Some of the field research is simply awful. I find it counter-intuitive at best that people will honk their horns at someone who appears to have a gun in their vehicle, as Turner and colleagues (1975) found in one of their field experiments. The fact that these authors could not replicate the finding in a larger-sample experiment is telling, as is the fact that two subsequent published and unpublished field experiments failed to replicate the initial Turner et al. (1975) finding. Seriously, think about it. Who in their right mind acts in any way to provoke (in this case by honking one's horn at) someone who is already driving around with a firearm?
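The small-sample point is easy to demonstrate with a quick simulation of my own (illustrative only, not a reanalysis of those studies): even with the true effect held constant, effect size estimates from studies with 10 participants per condition bounce around far more than estimates from studies with 100 per condition.

```python
import numpy as np

rng = np.random.default_rng(1967)  # a nod to Berkowitz and LePage
true_d, n_sims = 0.35, 10_000      # assume a fixed, modest true effect

def simulated_ds(n_per_cell: int) -> np.ndarray:
    """Cohen's d estimates from many simulated two-group studies."""
    ctrl = rng.normal(0.0, 1.0, (n_sims, n_per_cell))
    trt  = rng.normal(true_d, 1.0, (n_sims, n_per_cell))
    pooled_sd = np.sqrt((ctrl.var(axis=1, ddof=1) + trt.var(axis=1, ddof=1)) / 2)
    return (trt.mean(axis=1) - ctrl.mean(axis=1)) / pooled_sd

for n in (10, 100):
    ds = simulated_ds(n)
    print(f"n = {n:>3} per condition: d ranges from {ds.min():.2f} to {ds.max():.2f}, "
          f"SD of estimates = {ds.std():.2f}")
```

With 10 per cell, individual studies routinely land anywhere from strongly negative to wildly positive estimates, which is exactly the kind of between-study scatter that makes the behavioral literature so hard to pool.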
If someone asks me if weapons prime aggressive behavior, about all I can say is that I have no earthly way of knowing, based on the available data. I am more confident regarding cognitive and appraisal outcomes, with moderate publication bias effects duly noted. I am also confident that the effect occurs across age ranges and regardless of whether the sample includes college students or non-students. Thankfully, I am still fairly confident that this is a literature that has avoided serious decline effects. But ultimately the acid test is whether or not a stimulus can prime tangible aggressive behavioral outcomes. I am no longer convinced that the mere presence of weapons does so. I am not convinced yet that the mere presence of weapons fails to prime aggressive behavior either. The truth, based on the data, is that I just don't know. As noted earlier, the findings are inconclusive, and probably always were.
Whether or not this line of research is really worth reviving is an open question. It is conceivable that weapons may facilitate aggressive driving behavior to an extent, as Hemenway and colleagues suggest from some cross-sectional research. Bushman (in press) apparently found support for Hemenway's findings in a driving simulator experiment, but I am not sure I can make much of that work (its samples, at n = 30 per condition, are still a bit small for my comfort) until I see replications from independent researchers.
Really, the upshot to me is that if this line of research is worth bringing back, it needs to be done by individuals who are truly independent of Anderson, Bushman, and that particular cohort of aggression researchers. Nothing personal, but this is a badly politicized area of research, and we need investigators who can view this work from a fresh perspective as they design experiments. I also hope that some large-sample and multi-lab experiments are run in an attempt to replicate the old Berkowitz and LePage experiment, even if the replications are more conceptual in nature. Those findings will be what guide me as an educator going forward. If those findings indicate that there really is no effect, then I think we can pretty well abandon this notion once and for all. If, on the other hand, the findings appear to show that the weapons effect is viable, we face another set of questions - including how meaningful that body of research is in everyday life. One conclusion I can already anticipate is that any behavioral outcomes used will be mild in comparison to everyday aggression, and more importantly to violent behavior. I would not take any positive findings as grounds for jumping to conclusions regarding the risk of gun violence, for example - jumping to such conclusions would only further politicize the research, which would turn me off even more.
For now, the jury is out. Weapons appear to prime aggressive thoughts. Big deal. Until some well-designed behavioral research is available, we'll have to wait before we know much more. In the meantime, social psychology textbook authors may want to revise their aggression chapters when it comes to discussing the weapons effect. If the textbook authors won't, then I will make sure to mention what I know in my classes.