The blog of Dr. Arlin James Benjamin, Jr., Social Psychologist
Saturday, November 16, 2024
Something I want to circle back to
Saturday, July 6, 2024
Useful Retraction Watch Article on Retracted Articles
I have discussed the problem of retracted articles continuing to receive citations before. In the process, I've mentioned some limited circumstances in which I can accept that a retracted article will continue to be cited. For the most part, my usual line of thinking is that unless the retracted article in question is being cited either to address some theoretical or methodological concern with a particular research area or to serve as a cautionary tale of what not to do (especially from an ethical standpoint), I see no point in a retracted article maintaining zombie status. And yet that occurs far too often, as many have pointed out before.
Caitlin Bakker and Maria Zalm describe the problem and offer a summary of some solutions in their recent Retraction Watch article. Bakker and Zalm are chair and member, respectively, of the NISO CREC Working Group, and their recommendations appear to be a step in the right direction. You can read their recommendations in more detail here. The bottom line is that retractions and expressions of concern need to be far more visible than they are currently.
And the urgency of making retractions and expressions of concern much more visible is quite salient today. Articles are almost always retracted for a reason: either there was some form of fraud involved or there was some noticeable incompetence that makes the results, and the conclusions one might draw from the work, of questionable validity (to put it politely). When such work is cited without noting its retraction status, the citing authors are wittingly or unwittingly allowing a work that is no longer considered valid to maintain its air of false legitimacy. That can lead to us as professionals continuing to believe in and spread our equivalent of urban legends. And we have to recall that policy makers and attorneys use our work quite often, and not always for the public benefit. Consider the recent Supreme Court case in which the plaintiffs challenged the FDA approval and safety of mifepristone for medication-based pregnancy termination, using a retracted, fraudulent study as the cornerstone of their argument. Although the Justices thankfully ruled on the side of those who want to keep mifepristone available, that outcome was not a foregone conclusion. We can think, too, of the damage done by a retracted paper that supposedly linked childhood vaccines to autism. We're decades removed from that retraction, and yet that discredited work is still the foundation of what has become a political movement that has set back progress on preventing dangerous communicable diseases. Although the stakes are usually considerably lower for many retracted articles, they all leave at least a few victims in their wake. Let's hope that these new guidelines, as well as the sort of databases created by Retraction Watch and others, make a difference. The fewer zombies we have in our respective sciences, the better.
Monday, May 20, 2024
Interesting podcast on Open Science and its enemies
Now that I have a couple of moments to breathe, I've been able to spend a bit of time on Bluesky (which is where a lot of academic Twitter landed after Elon Musk took over the platform and made it far worse), and reconnect with some folks whose work I respect. I have been gathering that there might be some trouble in paradise among the community advocating for Open Science. I've seen some chatter about a preprint offering a very broad definition of what may be considered questionable research practices, one that has at least some in the community suggesting the definition really is too broad. I may come back around to that if and when I have the time to do more than give the preprint an initial reading. Instead, I'll focus on a podcast that I had never heard of before, entitled The Error Bar. It's a clever title, and the host, Nick Holmes, certainly strikes me as witty and open-minded. He is planning a three-part series on what he refers to as Open Science and its Enemies. This week's episode is called the p-circlers. We can think of p-circling as reverse p-hacking, according to Dr. Holmes. The upshot is that p-circlers home in on a finding they do not like and then look for ways to make the finding seem suspect, or so trivial as to not even be worth examining in the first place. Holmes focuses on one finding that seemed to create a stir, and a preprint that ends up resorting to p-circling behavior in order to explain away the findings as much ado about nothing. If you have 26 minutes or so to spare, it is a worthwhile podcast episode, and hopefully it will provoke some thought. Since I have some proverbial skin in the game when it comes to presenting open science practices in my undergraduate methods courses, I want to make sure that our actions really do move our respective disciplines and sub-disciplines forward, rather than simply weaponize a set of recently developed post-peer-review tools and in turn lead us to make the same mistakes as our predecessors. Anyway, give this episode a listen. I don't think you'll be disappointed.
P.S.: If you do not like listening to podcasts, Nick Holmes also blogs his podcasts. Here is the blog post for the episode on p-circlers.
Thursday, May 16, 2024
Another moral panic debunked: The "woke" university
Judd Legum has a reasonably good explainer of the current moral panic targeting colleges and universities: the myth that these institutions function primarily to indoctrinate students into some sort of "woke" ideology. In a sense, this is simply a rehash of earlier moral panics targeting colleges and universities. In my day as an undergraduate student, there were still plenty of pundits who persisted in claiming that universities were hotbeds of communist indoctrination - a notion I found quite amusing at the time. Later, I'd read that there was a noticeable downward trend in course offerings on Marxism and in Marxist scholarship by the 1980s. My guess is that what really drove the moral panic of my day was student-led efforts to urge universities to divest from South Africa's Apartheid regime. Today's moral panic appears to have its origins primarily in DEI initiatives by universities and university systems starting late last decade, and more recently in student protests over the war in Gaza, which are somewhat reminiscent of the anti-Apartheid protests of the 1980s and early 1990s.
As Legum notes, many of the individuals who seem to be responsible for our current moral panic over "woke" ideology in universities happen to be billionaires (often in the tech sector) and political pundits on streaming services or in our legacy media. There are the usual anecdotes to make the reader's or viewer's blood boil. Anecdotes intended to provoke outrage aside, are there any data to back up the claims that our universities are indoctrinating our students? The answer turns out to be no.
Legum mentions an organization called Open Syllabus. The data collected by Open Syllabus can be quite helpful. Obviously the data in this case come from publicly available syllabi, which could be a limitation, but they at least give us numbers with which to test the claim that our universities are too "woke" to effectively educate adult learners. It turns out that very few course syllabi include terms like Critical Race Theory (or race more broadly), transgender (or gender more broadly), and so on. In other words, as Legum notes, it appears possible and even probable that students, even at elite universities, can go through a four-year degree program without ever encountering any of the concepts that apparently cause our far-right billionaire and pundit classes to quake in their comfy slippers. We could certainly have a conversation as to whether a lack of exposure to structural racism, the concept of gender as a social construct, etc., is beneficial or detrimental to students who are preparing for careers in which they will likely work with and supervise a diverse set of individuals. There is certainly room for debate about the necessity and effectiveness of DEI initiatives in terms of fulfilling a university's primary mission and objectives. But clamping down on DEI programs, courses of study, and student speech on the basis of scant to nonexistent evidence is far from conducive to an environment in which ideas are freely exchanged and challenged.
The moral of the story is simple: if a claim sounds too outrageous to be true, it probably isn't true. At that point, it is a really good idea to question the source or sources of the claim, and do some digging to see if there are any trustworthy data to support it. If not, it is best to dismiss the claim as invalid and move on.
Sunday, May 12, 2024
The problem of retracted articles continuing to receive citations
Now that I am mostly done with grading for the semester, it is time to turn my attention to a phenomenon that has bothered me for quite a long time, and will most likely continue to bother me. It is not uncommon for articles to get retracted for any number of reasons. Sometimes an error gets caught that invalidates the claims made by the authors, sometimes fraud is involved, and sometimes there is plagiarism involved. It happens. Life should go on. In an ideal world, once an article is retracted, that should be the end of its useful life. That article should receive no more citations. We don't live in an ideal world. I know. I am being Captain Obvious about that. The reality, as this article in Retraction Watch points out, is far more complicated, and more concerning. Retracted articles may get cited less once the retraction is made public, but they can still rack up quite a number of citations.
I can think of a handful of reasons why an article might still be cited post-retraction:
1. In the months and years immediately after the retraction, papers by citing authors may have already been accepted for publication. It is quite likely that those citing authors would have had no knowledge that a retraction was in the works. These are good-faith citations, and should be treated as such. The impact the retraction has on the papers by the citing authors may or may not have significant ramifications, depending on how much of their argument was anchored by the retracted article, or on whether effect size data from the retracted article were included in a meta-analysis. Usually the ramifications are fairly negligible, but we shouldn't always assume that to be the case.
2. The retracted article is cited as part of an argument for why the work of a lab, a principal investigator, and/or that PI's collaborators should be viewed with a very healthy dose of skepticism. Under such circumstances, we should expect that the citing authors will explicitly label the article or articles in question as retracted.
3. The retracted article is cited as part of a debate about whether or not a specific theoretical model is still viable in the face of one or more of a theorist's articles being retracted. Although I am not certain I would want to build much of a case that a theory is debunked because the theorist was either negligent or fraudulent in at least one instance, I can appreciate how such examples can provide some context. Under such circumstances we should expect that the citing authors will explicitly label the article or articles in question as retracted.
4. The retracted article is cited as a cautionary tale of what not to do. Retracted articles can be rich case studies in their own right, and can guide scientists to steer clear of the sorts of mistakes or misdeeds that lead to a retraction. No two retractions are exactly alike, although there are some overlapping patterns. My favorite retraction narratives are ones where the authors of a botched article actively work to get the offending article retracted. That said, when a retracted article is cited as a cautionary tale, we should expect the article in question to be explicitly labeled as retracted.
Beyond those examples, I can't think of a defensible reason to continue citing an article that is for all intents and purposes removed from the public record (aside from the retraction notice specifying the reason for the retraction). I suppose it is possible that a citing author has cited the retracted article before it was retracted, and continues to cite it in their own future papers out of force of habit. That is not a good look. I see that sort of behavior as a sign that authors are not keeping current with the literature in their own areas of expertise. I can also imagine situations where the citing authors cite a retracted article in bad faith in order to further a theoretical or political agenda. That is also not a good look, and the societal ramifications are very, very concerning when citing authors simply hope that their readers don't bother to notice that they are basing their argument on findings that have been retracted for good reason.
There is a simple solution for those of us who want to avoid citing retracted articles. The Google Chrome browser has a PubPeer extension that is really good at flagging retracted articles. Those who install and use that extension can be alerted to retracted articles well ahead of time, giving them ample opportunity to learn why the retraction happened and to find another article to cite instead. Be smart and use some simple tools that are freely available. In the process, you will help make your particular science more trustworthy. That's a good thing.
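For those who would rather build the check into their own workflow, the Retraction Watch database is, as of late 2023, distributed openly through Crossref, which records retractions as "update" metadata attached to the original DOI. Below is a minimal sketch in Python of how one might query for retraction notices. It assumes the third-party requests library and that Crossref's documented updates filter behaves as described; the function name and the example DOI are mine, and the DOI is a placeholder rather than a real paper.

```python
import requests  # third-party; install with: pip install requests

def find_retraction_notices(doi: str) -> list[tuple[str, str]]:
    """Ask the Crossref REST API for works that declare themselves
    updates (retractions, corrections, expressions of concern) to a DOI."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}", "rows": 20},
        timeout=30,
    )
    resp.raise_for_status()
    notices = []
    for item in resp.json()["message"]["items"]:
        # Each updating work lists the DOIs it updates and the update type.
        for upd in item.get("update-to", []):
            if upd.get("DOI", "").lower() == doi.lower():
                notices.append((upd.get("type", "unknown"), item.get("DOI", "")))
    return notices

# Hypothetical usage; the DOI below is a placeholder, not a real article:
# print(find_retraction_notices("10.1234/example.2020.001"))
```

An empty result does not prove an article is clean, of course; it only means no update is on record at Crossref, so a tool like the PubPeer extension remains a sensible complement.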
Tuesday, April 16, 2024
Postscript to the Preceding: Weapons Priming Effect and Potential Allegiance Effect
As I noted in my prior post, almost all of the experiments explicitly testing for a weapons priming effect (short-term exposure to weapons increasing the relative accessibility of aggressive cognition) involve the primary General Aggression Model theorists (Anderson and, more recently, Bushman) and/or their graduate students or associates. We are a very small group of individuals. The methods we use are strikingly similar, both in terms of independent variables and dependent variables. Over the years, we used very similar protocols when running our experiments. We used the same theoretical basis for our work. I can find only one researcher independent of our clique who successfully replicated our findings (Korb, 2016), and as far as I am aware, she never published her Master's thesis. With very few exceptions (e.g., Deuser, 1994), our experiments consistently found statistical significance. I often wonder whether there were more independent efforts to replicate our basic findings; if there were, and they were unsuccessful, I would be curious to know what those researchers thought happened, especially if they used protocols similar to ours. Otherwise, what we have is something of a niche area of inquiry that likely started and ended with just our cohort. I don't find that especially comforting.
Footnote: I am quite aware that Qian Zhang, who had a weapons priming effect paper retracted (Zhang et al, 2016), does still look at the weapons priming effect, and although his lab's findings on the surface are consistent with our own, I simply discard that work as I do not trust his reported descriptive and inferential statistics. Let's just say that GRIM and SPRITE tests tend to uncover mathematically impossible descriptive statistics in too many of his lab's findings. I shall leave it at that.
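For readers unfamiliar with those tools, the GRIM test boils down to simple arithmetic: if a measure consists of integer responses, the sum of n responses must be an integer, so only certain means are possible for a given n. Here is a minimal sketch in Python, assuming integer-valued single-item responses; the function name and the example values are mine, for illustration only, and real GRIM checks handle multi-item scales and rounding conventions more carefully.

```python
import math

def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Check whether a reported mean is arithmetically possible given
    n integer-valued responses (a simplified GRIM test)."""
    # The sum of n integers is itself an integer, so some integer total
    # divided by n must round back to the reported mean.
    for total in (math.floor(reported_mean * n), math.ceil(reported_mean * n)):
        if round(total / n, decimals) == round(reported_mean, decimals):
            return True
    return False

# A reported mean of 5.19 from n = 28 integer responses is impossible:
# 145/28 rounds to 5.18 and 146/28 rounds to 5.21, so nothing yields 5.19.
print(grim_consistent(5.19, 28))  # False: GRIM-inconsistent
print(grim_consistent(5.18, 28))  # True: arithmetically possible
```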
Sunday, March 31, 2024
The Weapons Priming Effect: A Brief History and a Word of Caution
After Carlson et al. (1990) published their meta-analysis, the weapons effect was considered an established phenomenon. Short-term exposure to weapons appeared to increase the level of aggressive behavioral responses in lab and field settings compared to short-term exposure to neutral stimuli. After 1990, there was a dearth of research examining the effect of weapons on aggressive behavioral outcomes. Instead, there appeared to be a shift toward examining the cognitive underpinnings of the weapons effect. Although there were a couple of experiments in which participants were administered a Thematic Apperception Test (TAT), arguably as an attempt to assess whether participants thought more aggressively when exposed to weapons (e.g., Frodi, 1975), it would not be until the 1990s that a small group of social psychologists would more explicitly assess whether mere exposure to weapons or weapon images primed aggressive cognition, utilizing techniques pioneered by the Cognitive Revolution in psychology.
By the mid-1990s, some aggression researchers were using schema or associative network theories as a means to understand the impact of various aggression-inducing stimuli on aggressive cognition, affect, and behavior. Craig Anderson, for example, was already developing a theory known at the time as the General Affective Aggression Model (GAAM), shortened to the General Aggression Model (GAM; Anderson & Bushman, 2002) by the start of this century. In that model, individuals store aggression-related information in the form of cognitive schemas and behavioral scripts. Exposure to stimuli theorized to be associated with aggression or violence would prime these schemas or scripts, leading to increased accessibility of aggressive thoughts, and potentially to an increase in aggressive behavioral responses. One interpretation of the classic Berkowitz and LePage (1967) weapons effect experiment was that those participants in the room containing rifles were primed to think more aggressively and hence, under high levels of provocation, respond more aggressively.
The first explicit effort to test for a weapons priming effect was in the dissertation of Deuser (1994), a student of Craig Anderson. The experiments in that particular dissertation did not demonstrate a weapons priming effect at all. It did not matter whether participants were exposed to weapons or neutral stimuli. There was no evidence to support the theory that weapons would prime aggressive thoughts. The first published evidence of a weapons priming effect was Anderson et al (1996), although the weapons priming effect was secondary to the main purpose of the article, which was to establish the General Aggression Model as a theory and to test the effects of uncomfortable heat on aggressive cognition, affect, and attitudes. The weapons priming effect was tested on those participants who were not exposed to uncomfortable heat. Participants were exposed to either weapon or neutral object images and were given a Stroop test to assess the accessibility of aggressive cognition. Participants primed with a gun had more aggressive thoughts than those primed with a neutral object. The effect was fairly small, which seems to be a theme with this line of research, but definitely noticeable.
That finding by Anderson et al (1996) was promising. The next step was to examine whether weapons truly semantically primed aggressive thoughts. Anderson et al (1998) conducted two experiments. This is where Bruce Bartholow and I (both graduate students at the University of Missouri at the time) come in. I had already been exposed to the schema and script theories described in the article's introduction, which helped considerably in designing the experiments. I did a lot of the legwork to find a reasonably sensitive measure of the accessibility of aggressive cognition, and eventually settled on a version of the pronunciation task, in which participants read the target word into a microphone, and the time between the onset of the stimulus and the moment the participant's voice is picked up by the microphone is measured in milliseconds. For our purposes, participants demonstrated relative accessibility of aggressive thoughts if they reacted faster to aggressive target words than to non-aggressive target words. The only difference between our two experiments was the stimuli. In Experiment 1, the prime stimuli were weapon words versus animal words. In Experiment 2, the prime stimuli were a mix of weapon images or a mix of neutral images. We had participants go through a few blocks of trials in which they would first see the prime and then speak into the microphone when they saw the target word. So participants would first see a weapon (word or image) or neutral (word or image) concept and then see an aggressive or non-aggressive target word, which they pronounced into the microphone as quickly and accurately as possible. We expected participants to show the fastest reaction times when weapon primes were paired with aggressive target words. In each experiment, our expectations were confirmed. The effect size for Experiment 1 was between small and medium, and in Experiment 2 it was closer to that of Anderson et al (1996).
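To make the logic of that design concrete, here is a minimal sketch, in Python, of how a priming effect could be computed from trial-level pronunciation latencies. The trial records and the numbers are invented for illustration; this is the interaction contrast in its simplest form, not our original analysis code.

```python
from statistics import mean

# Hypothetical trial records: (prime_type, target_type, reaction time in ms).
trials = [
    ("weapon",  "aggressive",     512), ("weapon",  "non-aggressive", 541),
    ("neutral", "aggressive",     538), ("neutral", "non-aggressive", 544),
    # ...one row per trial, per participant, in a real data set
]

def cell_mean(prime: str, target: str) -> float:
    """Mean reaction time for one prime-by-target cell of the design."""
    return mean(rt for p, t, rt in trials if p == prime and t == target)

# Relative accessibility of aggressive thoughts within each prime condition:
# faster pronunciation of aggressive targets than non-aggressive targets.
weapon_facilitation = cell_mean("weapon", "non-aggressive") - cell_mean("weapon", "aggressive")
neutral_facilitation = cell_mean("neutral", "non-aggressive") - cell_mean("neutral", "aggressive")

# The priming effect is the interaction contrast: more facilitation of
# aggressive targets after weapon primes than after neutral primes.
print(f"Priming effect: {weapon_facilitation - neutral_facilitation:.1f} ms")
```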
Since the publication of Anderson et al (1998), there have been several successful efforts to replicate and in some cases extend that finding. The dependent variables may differ (e.g., lexical decision task, aggressive word completion task or AWCT), and the experimental design may either be between-subjects or within-subjects, but the basic concept is the same. Bartholow et al (2005) in Experiment 2 started out as a replication and extension, and initial drafts of the manuscript included the analysis demonstrating the replication of Anderson et al (1998). That analysis was deleted in the published version. Thankfully, I kept the file with the analyses, although I don't have the original data file any more. There is a story behind the delay between when we ran our experiments and publication date, but that was more of an error by the editor, and can be chalked up to what life was like before electronic submissions of manuscripts. The effect was smaller than in any successful experiment up to that point, but still statistically significant. I think of the cognitive experiment by Lindsay and Anderson (2000) as a solid conceptual replication. Bartholow and Heinz (2006) would also offer a good faithful replication as a means of demonstrating that the concept of alcohol had a similar effect on relative accessibility of aggressive cognition. Subra et al (2010) would offer a conceptual replication of Bartholow and Heinz (2006). Bushman (2017) and Benjamin and Crosby (2019) also successfully replicated and extended the Anderson et al (1998) experiments.
This seems like a fairly rosy picture. But here is where I start to have concerns. Every experiment I have mentioned thus far involves either Anderson or Bushman or their associates or former advisees. I have blogged before about experimenter allegiance effects in the context of understanding the discrepancy between the findings of Berkowitz and his various colleagues and those of independent researchers who failed to replicate a behavioral weapons effect, including the Buss et al (1972) direct replication attempt. One of my concerns is that we may have a similar phenomenon with our weapons priming effect research, but there has been almost no work done to independently try to replicate our findings. The Anderson et al (1998) paper is one I am still proud of, and it is cited fairly frequently even today. But there is a problem. Almost anyone independent of our cohort of researchers uses that paper as a means of justifying their own experiments on phenomena that are often entirely divorced from our research. There may be an allegiance effect that we are missing. Most experiments outside the Anderson-Bushman cohort that could arguably be coded as weapons priming experiments are secondary to the main thrust of the published papers, or use dependent variables that could defensibly be used as proxies of aggressive cognition, although reasonable skeptics would certainly have questions and might challenge any judgment that the measures involved are truly proxies of aggressive cognition. At least one experiment that was purported to be a conceptual replication of Anderson et al (1998) was later retracted due to the research and data analyses being fundamentally flawed, if not outright fraudulent (see Zhang et al, 2016, which I've written about before). I've seen one successful independent replication of Bartholow et al (2005) that used a somewhat defensible dependent variable (Korb, 2016). Regrettably, publication never came to fruition.
We are left with a cohort of weapons priming effect researchers who have almost exclusively used the GAM as a theoretical model (which is itself deserving of challenge), and who may have produced experiments that are unique to our particular cohort. Independent researchers might take our protocols and find something entirely different. Such researchers may find different and arguably better ways to test the same hypotheses we tested. Unfortunately, there appears to be no way to know, unless or until enough aggression researchers come forward and report their own replication attempts, either as preprints or as publications. If independent researchers find something different from what my group of researchers found, I think that would be interesting and worth understanding. I would want to know what was different. It is possible, and perhaps even probable, that independent researchers would consider things we would not have considered. How that would impact the overall body of weapons priming research is very much unknown. It could well be the case that there was just something unique about those of us who studied the weapons priming effect as Anderson's or Bushman's students and associates, and that our findings can be safely ignored moving forward. All I know is that the findings from an experiment I conducted back at the start of my doctoral career (Anderson et al, 1998, Experiment 2) are in line with the overall effect size for aggressive thoughts that I reported in a meta-analysis (Benjamin et al, 2018). With regard to that meta-analysis, I will gladly defend any coding decisions made in collaboration with my third author, even if we clearly differ in how to interpret our findings once we had ascertained that the analyses of effect sizes were sound.
One final thought. Although I was trained via the GAM theoretical model, I am not wed to it. I think a good case could be made that the weapons priming effect is little more than a cognitive response equivalent to classical conditioning, and that beyond those cognitive responses, there isn't a whole lot left to write about. I've said that in some other contexts, so I'll say it here. There are theorists (including philosophers) who are trying to place the whole body of weapons effect research into different theoretical frameworks. Their work is worth examining.
In the meantime, I am very concerned that my particular line of weapons priming effect research is little more than an experimenter allegiance effect. I won't know more unless or until other independent researchers come forward. If their findings are consistent enough with what my cohort and I found, that's swell. We've at least established a cognitive response to what should be an aggression-inducing stimulus. If not, aggression researchers like me need to go back to the drawing board. That is also okay. From my own perspective, I made my peace with this line of research several years ago. I started siding with the skeptics for a reason, and that was because the skeptics had the better argument on matters of theory and on the findings themselves. With regard to the weapons effect, or the weapons priming effect, I started out without a horse in that race. I will likely end my career without a horse in this race. Turns out there is nothing wrong with that.
Thursday, February 29, 2024
Personal Update
It's been a while since I've said much about my own life. Leap Day seems as good a time as any. I have been quiet in terms of blogging and social media for a bit because I have simply been enormously busy. I have taken on some extra adjunct gigs to deal with the effects of inflation, since cost-of-living increases are very few and far between these days, and to pay down some family medical expenses. Much of my life is spent grading and making sure course links still work. It doesn't mean I don't go to conferences (I still do) or publish (I quietly finished a chapter on authoritarianism a few weeks ago), but that is not currently my primary focus. I have plenty I would love to discuss, but the time to really put the necessary thought into those topics simply does not exist. When it does, I will post here. In the meantime, cheers.
Tuesday, February 27, 2024
The psychology of anti-vax bias
Ron Riggio has a handy primer on how best to stay informed about vaccinations, the sort of information to trust, and the various biases that can be problematic. A lot of his post is a good application of basic judgment and decision-making research. As a general rule, I agree with getting information from reliable sources, and that social media are rife with disinformation. That said, I did follow a number of well-respected virologists and epidemiologists on what used to be Twitter between 2020 and 2022, and found their posts quite informative. Here's the catch: they also relied on reputable sources (e.g., the CDC, rigorous empirical research, etc.).