Monday, April 1, 2019

This is intended to be a quick fly-by post, inspired by a talk an anthropologist friend of mine gave at a local bookstore over the weekend. In discussing the tourist industry centered on the frontier mythology that dominates my community, he noted that social facts often hide historical facts. Countering the prevailing narrative is not an easy task, and it is a great way to make a few enemies along the way.
Anyway, his presentation got me thinking a good deal about how we go about teaching the psychological sciences, and more specifically social psychology. Anyone who has ever taken an introductory psychology course will inevitably have read and been lectured on the story of Kitty Genovese, who was murdered in Queens in 1964. I won't repeat the story here, but I will note that what we often see portrayed in textbooks is not quite what actually happened. The coverage spawned research on the Bystander Effect, which may or may not be replicable. As a narrative, Genovese's murder has been used by social conservatives as an example of the breakdown of traditional moral values in modern society (part of the subtext is that Ms. Genovese was a lesbian), and by social psychologists to further their narrative of the potentially overwhelming power of the situation. Policymakers have used the early Bystander Effect research based upon the myth of Kitty Genovese to pass "Good Samaritan" laws. The Bystander Effect and the myth of Kitty Genovese that spawned that research have even been monetized by ABC in the reality series What Would You Do? There is power and profit to be had in maintaining the narrative while burying the historical facts.

After several decades, the damage is done. The APA's coverage of the murder of Kitty Genovese effectively debunked the myth a decade ago, and yet it still persists. If more non-replications of the Bystander Effect across nations and cultures are reported, that is great for science - to the extent that truth is great for science. However, my guess is that the debunked classic work will remain part of the narrative, shared on social media and in textbooks, for the foreseeable future.
In my own little corner of research, there is a sort of social narrative that has taken hold regarding various stimuli that are supposed to influence aggressive and violent behavior. It is taken as a given that media violence is causally related to aggression and even violence, even though skeptics have successfully countered that narrative with ample data to the contrary. Even something as superficial as short-term exposure to a gun or a knife is supposed to lead to aggressive behavior. The classic Berkowitz and LePage (1967) experiment is portrayed in social psychology textbooks as a prime example of how guns can trigger aggressive behavioral responses. Now, lab experiments like the one Berkowitz and LePage conducted are often very artificial, and hence hard to accept as representative of real life. But what if I were to tell you that some researchers went out into the field and found the same effect? You'd be dazzled, right?

Turner and his colleagues (1975) ran a series of field experiments in which a driver (a member of the research team) blocked other drivers at intersections. They used whether or not the blocked driver honked as their measure of aggression. After all, a horn honk is loud and annoying, and in urban environments is often used as the next best thing to screaming at other drivers. At least that is the thinking. Sometimes the driver's vehicle had a rifle displayed on a gun rack; other times it did not. The story goes that Turner and colleagues found that when the gun was present, the blocked drivers honked their horns more. Case closed. The problem is, Turner and colleagues never actually found what we present in textbooks. Except for one possible subsample - males driving late-model vehicles - the sight of a firearm actually suppressed horn honking! That actually makes sense. If you are behind some jerk at a green light who has a firearm visibly displayed, honking at them is a great way to win yourself a Darwin Award. What was actually happening, then, is that with the possible exception of privileged males, drivers tended to make the correct assessment that there was a potential threat in their vicinity and that they should act cautiously. For the record, as far as I am aware, no one since the Turner and colleagues report has found support for the claim that the mere presence of a gun elicits horn honking.

And yet the false social narrative continues to be perpetuated. Who benefits? I honestly don't know. What I do know is that I see the narrative of the weapons effect (or weapons priming effect) used by those who want to advocate for stricter gun laws - a position I tend to agree with, although the weapons effect as a body of research is probably a very weak foundation for that argument. Those who benefit from censoring mass media may benefit again from the power they accumulate. Heck, enough lobbying got rid of gun emojis on iPhones a few years ago, even though there is scant evidence that such emojis have any real impact on real-world aggression, let alone violence.
Finally, I am reminded of something from my youth. When I was a teen, the PBS series Cosmos aired. I had read Sagan's book The Cosmic Connection just prior, and of course I was dazzled by the series, and eventually by the book. I still think it is worth reading, though with a caveat. Sagan tells the story of Hypatia - a scientist whom probably any contemporary girl would want to look up to - and her demise. As the story goes, Hypatia was the victim of a gruesome assassination incited by a bishop in Alexandria (in modern-day Egypt), and the religious extremists of the time subsequently burned down the Library of Alexandria. Eventually I would do some further reading and realize that Hypatia's assassination was a much more complex story than the one Sagan told, and that the demise of the Library of Alexandria (which was truly a state-of-the-art research center for its time) occurred over the course of centuries. Sagan's tale was one of mindless fanaticism destroying knowledge. It's a narrative I am quite sympathetic toward. And yet the tale is probably not quite accurate. The details surrounding Hypatia's murder are still debated by historians, and the Library's demise can be attributed to multiple causes, including government neglect as well as the ravages of several wars.
Popular social narratives may play on confirmation bias - a phenomenon any of us is prone to experiencing - but the historical or empirical record may tell another story altogether. If the lessons from a story seem too good to be true, they probably are. A healthy dose of skepticism is advised, even if it is not particularly popular. In the behavioral and social sciences, we are supposed to be working toward approximations of the truth. We are not myth-makers and storytellers. To the extent that we accept and perpetuate myths, we are producing little more than science fiction. If that is all we have to offer, we do not deserve the public's trust. I think we can do better, and often we really do.
Wednesday, March 20, 2019
Higher Ed is being starved
Most people don't get it. The land-grant universities, and the regional universities and colleges tied to them, continue to hemorrhage state funding. The public may not be aware, but those of us working on the front lines definitely notice. My former Chancellor made it clear nearly two years ago that the public universities in my state were public in name only. We receive maybe a little under 30 percent of our funding from state or federal sources. The rest comes off the backs of people who are often income- and food-insecure themselves. Professional development requirements for faculty - and believe me, they are requirements in order to stay employed - are increasingly paid for by the faculty themselves. We expect students to work full time, take a full load of classes, and somehow graduate within a four-year time frame. This is not a sustainable set of circumstances.

Higher education is a public good, and it needs to be treated as such. I am related to people who were first-generation university students. I hear their stories about how difficult surviving those college years could get, and yet their stories pale in comparison to some of the stories my own students could tell. I wish more folks would get it, and would wake up their legislators. In the meantime, we continue to starve, and so too do our communities.
Wrong Answer
I probably shouldn't pick on Elsevier, but its journal editors and publishers often make it too easy for me. I've experienced similar feedback on manuscripts submitted to non-Elsevier journals, with similar offers to publish in a lower-impact "open access" journal as long as I was willing to fork over around three or four grand. I realize this will come as quite a shock to many readers, but I don't actually have stacks of cash under the mattress just in case I need to get a manuscript published. Nor do my colleagues at regional universities (where I work) or community colleges. Nor would my institution be able to reimburse me if I could somehow front the money for publication.

Elsevier editor Spada acknowledging that null results are not even considered for Addictive Behaviors, seemingly not realizing how problematic that is. Offering a lower prestige alternative journal doesn't make that right. pic.twitter.com/KN6DDkKilh
— Rink Hoekstra (@RinkHoekstra) March 19, 2019
I look at it this way: what the editor of this particular journal did was lay bare a genuine concern for those of us who value open access. Simply saying "look...no paywalls" is insufficient if the citizens who fund the research have to pay a for-profit company for the privilege of making it public, or if underpaid faculty have to go nearly bankrupt in order to meet their professional development obligations. That is essentially the message this particular editor is sending me. Scientific work is a public good. It should be treated as such. Those who edit and publish have an obligation to respect the trust taxpayers place in scientific endeavors, every bit as much as those of us who do the research do. There is something rotten about a system that effectively double-bills taxpayers and/or researchers. There is something equally rotten about the so-called premier journals explicitly or implicitly engaging in publication bias (favoring statistically significant novel findings over replication attempts and null findings) and relegating the more fundamental work of researchers to outlets that will presumably go unread. The older I get, the less patience I have for that sort of approach. It violates the spirit of the scientific enterprise in favor of greed.
In short, I get it: the insular worlds within which we scientists live are microcosms of our aching planet. The system itself is fundamentally broken. Too many of us feel trapped - too trapped to rebel against a system that is clearly stacked against honest researchers and the public. There are no clear rewards for those who rebel. And yet increasingly, I think we must rebel. Someone once wrote something about having nothing to lose but our chains. Maybe that person was on to something.
Saturday, March 16, 2019
A reminder of blog netiquette
My impressions about the norms governing blogging were formed back around the time I first heard of blogs, which would be right around the turn of the century. Yes, that is a long time ago. One important norm is that once a post is published, it remains unchanged. I've seen some reasonable modifications of that norm - corrections for typos within a 24-hour period are probably worth making. Generally, if serious changes need to be made to a post, either the original text is struck through so that readers can see what was first posted (as a means of transparency), or the author creates a new post and acknowledges what went sideways with the old one. One of the most disappointing episodes I've experienced in reading others' blogs was noticing that a blogger had taken a post from earlier in this decade and completely revamped it without revealing what had changed. I really should have taken screenshots of the original post and the changed post. That might have been educational in itself, as long as I could have found a way to illustrate what had happened without it coming across as a sort of "gotcha" hit piece. All that said, there are moments when I become just a bit more jaded about humanity than I already was.
My SPSP Talk in February 2019
I was quite surprised and honored to be included in a panel on the social psychology of gun ownership at the most recent SPSP conference in Portland, based on my work on the weapons effect. Although I often consider myself a flawed messenger these days, and think of myself more as a reluctant expert on the weapons effect as a phenomenon, I was excited to attend and to see how an audience would receive my increasingly skeptical view of the topic. As some who read this blog know, my wife was injured in a freak accident just prior to Christmas, and up until the last week of February I had been acting primarily as her caretaker (while taking care of my faculty responsibilities as well!). As a result, I had to cancel my trip.
When I broke that news to the symposium organizer, Nick Buttrick, he worked with me so that I could still participate in some way. We looked into a number of options and settled on an audio PowerPoint slide presentation, which allowed me to still be a part of the proceedings - even from a distance. I am grateful for that. If you are ever interested, you can find my slides archived at the Open Science Framework. Just click this link and you will have access to audio and non-audio versions.
There are probably better public speakers, but if nothing else, I do have a story to tell about the weapons effect based on the available evidence. This presentation is based in part on the meta-analysis I coauthored and published late last year, as well as on a narrative review I have in press (aimed at a much more general social science audience), and some new follow-up analyses I ran last fall. I will be giving another version of this talk to an audience of mostly community college and small university educators in the social sciences in April. I am realizing that I am probably not done with the weapons effect. There are truths in that meta-analysis database that still need to be examined, and I would not be surprised if a case could be made for an update in the next handful of years as new work becomes available.
Monday, March 11, 2019
"La lucha sigue, y sigue, y sigue..."
I thought I'd nick a line from one of the several books John Ross authored on the Zapatista rebellion in Chiapas before he passed away. If you need a quick translation: "the struggle continues, and continues, and continues..." So what am I on about now? Let me give you a clip from an interesting blog post, and then we'll go from there:
In these “conversations,” scholars recommending changes to the way science is conducted have been unflatteringly described as sanctimonious, despotic, authoritarian, doctrinaire, and militant, and creatively labeled with names such as shameless little bullies, assholes, McCarthyites, second stringers, methodological terrorists, fascists, Nazis, Stasi, witch hunters, reproducibility bros, data parasites, destructo-critics, replication police, self-appointed data police, destructive iconoclasts, vigilantes, accuracy fetishists, and human scum. Yes, every one of those terms has been used in public discourse, typically by eminent (i.e., senior) psychologists.
Villainizing those calling for methodological reform is ingenious, particularly if you have no compelling argument against the proposed changes*. It is a surprisingly effective, if corrosive, strategy.
Yes, it is, at least in the short term. I haven't heard open science advocates referred to as The Spanish Inquisition just yet, but then again "nobody expects The Spanish Inquisition!"
But I digress. When a whole group of scholars and educators is characterized as Nazis or Stasi, that's bound to be some sort of red flag. After all, I'd like to think we all agree that Nazis are bad, and that the Stasi (or the KGB or any other such outfit) is not an organization we'd want to emulate. Even the term authoritarian is quite loaded. I study authoritarianism as part of my research program, so that term definitely makes an impression - and definitely not in a good way. But what if the people who are being called all these names are nothing like that? If one has formed an impression of someone as the equivalent of a member of one of these awful groups, would there even be any motivation to interact? That's my real concern: stifling conversations that I believe we need to have.
I've noticed similar language used to describe researchers who have presented findings that run counter to popular claims that various forms of mass media influence aggression and violence. Being a skeptic in this particular corner of the research universe can get you labeled a "holocaust denier" or an "industry apologist" (among other epithets). In the short term, that might work for those who have legacies to defend. Long term? What happens when more and more of the studies citing your work do so primarily to refute it? Ignoring and name-calling will only get you so far. Maybe things are not as settled as was previously thought. But once that well has been tainted, productive dialog is not exactly going to happen. And that is one hell of a shame.
Since I've seen this before, I am not surprised that calls to make the way we conduct our research more transparent have met similar resistance. As someone who simply found myself teaching my methods courses a few years ago, I can tell you that my initial reaction to the crisis in confidence (which now encompasses a replication crisis, a measurement crisis, and a theoretical crisis) was a bit sanguine. Then I became more concerned as more evidence and commentary came in. Since I did not know the main proponents of open science personally, I decided to follow their work from a distance. It turns out I was overly cautious at first (after all, when a group gets characterized negatively...). Over time I have waded in and interacted. And I don't see Stasi, or authoritarians, or human scum, etc. What I see are mostly young-ish researchers who share a similar set of concerns about the field, who seem committed to getting it as right as is humanly possible, and who are fun to talk to. I realize that negative portrayals may make it hard for others to see things as I do. I also won't discount that there are probably some bad actors among open science proponents (but isn't that true in practically any facet of human existence?).
The folks who can teach you how to detect data analysis reporting errors and spot possible p-hacking, and who can offer solutions that may prevent many of the problems plaguing the psychological sciences, are worth a fair hearing. Probably much more than that. Business as usual hasn't exactly been working, as the evidence continues to mount each time a classic finding gets debunked or a major work turns out to be so riddled with errors that it is no longer worth citing. To see what the simplest of those error checks looks like, consider the sketch below.
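Here is a minimal sketch, in Python, of the idea behind tools like statcheck: recompute a reported two-tailed p-value from the reported test statistic and degrees of freedom, and flag any mismatch. To be clear, this is my own illustrative sketch - the function and the "reported results" are hypothetical, not anyone's actual tool or data.

```python
# Minimal statcheck-style consistency check (illustrative sketch only).
# Given a reported t statistic, its degrees of freedom, and the
# reported two-tailed p-value, recompute p and flag disagreements.
# A small tolerance is needed because reported p-values are rounded.
from scipy import stats

def check_t_test(t_value: float, df: int, reported_p: float,
                 tolerance: float = 0.005) -> bool:
    """Return True if the reported p-value is consistent with t and df."""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)  # two-tailed p
    return abs(recomputed_p - reported_p) <= tolerance

# Hypothetical reported results, as they might appear in a paper:
# "t(58) = 2.10, p = .04" and "t(58) = 1.52, p = .03"
reported_results = [
    (2.10, 58, 0.04),  # recomputes to roughly .040 - consistent
    (1.52, 58, 0.03),  # recomputes to roughly .134 - inconsistent
]

for t_value, df, reported_p in reported_results:
    verdict = ("consistent" if check_t_test(t_value, df, reported_p)
               else "worth a second look")
    print(f"t({df}) = {t_value}, reported p = {reported_p}: {verdict}")
```

Nothing fancy, but run across an entire literature, checks of this sort have turned up reporting inconsistencies at a surprising rate - which is part of why I think these folks deserve that fair hearing.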
In the meantime, I have no illusions about the academic world. It has always been a rather contentious arena. Arguing over data or theory may or may not be fruitful. Arguing over how to build a better mousetrap probably is fruitful. In those cases, the more interaction, the better. Maybe we will end up not agreeing on much. Maybe we'll find common ground. Name-calling, on the other hand, is pointless, and merely betrays a lack of ideas, or at minimum a lack of confidence in one's ideas. Noting that basic fact won't stop that particular phenomenon. It comes with the territory. The best we can do is spot toxic behavior when it occurs, try to accept it for what it is, and minimize our exposure to those who genuinely are bad actors. And realize in the process that the struggle to change a field for the better will likely feel endless. The struggle will continue, and continue, and continue.
Onward.
Sunday, March 10, 2019
A Current Opinion on Current Opinion in Psychology
I found the following tweet to be amusing - in the sense of being funny/not-funny:
There's a special issue of Current Opinion in Psychology focused on Mindfulness. I was amused to see a paper of mine cited in a review which is heralding the use of mindfulness apps as an effective way of improving mental health - because our results did now show any benefit! pic.twitter.com/8NoOqqgZ83
— Chris Noone (@Chris_Noone_) February 11, 2019

Typo note: I think Chris Noone meant "not show any benefit!"
With that out of my system, let me offer a couple of thoughts about Current Opinion in Psychology. The journal is published by Elsevier. Its stated mission reads:
In Current Opinion in Psychology, we help the reader by providing in a systematic manner:
The views of experts on current advances in psychology in a clear and readable form.
Evaluations of the most interesting papers, annotated by experts, from the great wealth of original publications.
I certainly think questions need to be asked about how guest editors get chosen in the first place. Are they recruited? Do they come up with what they think would be a brilliant idea for an issue and get a green light? How do guest editors go about selecting authors to write brief narrative reviews? What decision criteria do they rely upon? Do they simply choose their best friends? Do they look for skeptics to provide a fair and balanced treatment of the topics covered in a particular issue? What is really going on with the peer review process? I've noted my experiences before. Let's just say that a 24-hour turnaround is frightening to me, as I have no reason to believe that a reviewer could digest even a brief manuscript and properly scrutinize it in that time frame. How is the much-ballyhooed eVise platform actually used by the editorial team responsible for this journal? Is it actually used to properly vet manuscripts from the moment of initial submission onward? If not, why not? That becomes a critical question given how much Elsevier loves to brag about its commitment to COPE guidelines. If not, we are also left with an observation made elsewhere: that all eVise does is create a glorified PDF file that any one of us could create using Adobe Acrobat. We are left wondering whether any genuine quality control actually exists - at least in a way that is meaningful for those of us working in the psychological sciences. What if the process, from recruiting guest editors to vetting manuscripts, is so fundamentally flawed that much of the "current opinion" published is more akin to the death throes of theoretical perspectives moments before they are swept into the dustbin of history?
At the end of the day, I am left wondering whether the psychological sciences would be better served without this particular journal, and whether we experts could instead simply blog our reviews of recent developments in our respective specialties, or offer the occasional tweet storm. Heck, it would certainly save readers some time and money, and authors some headaches. That is my current opinion, if you will.