Wednesday, March 20, 2019

Higher Ed is being starved

Most people don't get it. The land grant universities, regional universities, and colleges tied to these institutions continue to hemorrhage state funding. The public may not be aware, but those of us working on the front lines definitely notice. My former Chancellor made it clear nearly two years ago that the public universities in my state were public in name only. We receive a little under 30 percent of our funding from state or federal resources. The rest comes off the backs of people who are often income and food insecure themselves. Professional development requirements for faculty - and believe me, they are requirements in order to stay employed - are increasingly paid for by the faculty themselves. We expect students to work full time, take a full load of classes, and somehow graduate within a four-year time frame. This is not a sustainable set of circumstances. Higher education is a public good, and needs to be treated as such. I am related to people who were first-generation university students. I hear their stories of how difficult surviving those college years could get, and yet their stories pale in comparison to some of the stories my own students could tell. I wish more folks would get it, and would wake up their legislators. In the meantime, we continue to starve, and so too do our communities.

Wrong Answer

I probably shouldn't pick on Elsevier, but its journal editors and publishers often make it too easy for me. I've experienced similar feedback on manuscripts submitted to non-Elsevier journals, with similar offers to publish in a lower-impact "open access" journal as long as I was willing to fork over around three or four grand. I realize this will come as quite a shock to many readers, but I don't actually have stacks of cash under the mattress just in case I need to get a manuscript published. Nor do my colleagues at regional universities (where I work) or community colleges. Nor would my institution be able to reimburse me if I could somehow front the money for publication.

I look at it this way: what the editor of this particular journal did was lay bare a genuine concern for those of us who value open access. Simply saying "look...no paywalls" is insufficient if the citizens who fund the research have to pay a for-profit company for the privilege of making it public, or if underpaid faculty have to go nearly bankrupt in order to meet their professional development obligations. That is essentially the message this particular editor is sending me. Scientific work is a public good. It should be treated as such. Those who edit and publish have an obligation to respect the trust taxpayers place in scientific endeavors, every bit as much as those of us who do the research. There is something rotten about a system that effectively double-bills taxpayers and/or researchers. There is something equally rotten about the so-called premier journals explicitly or implicitly engaging in publication bias (favoring statistically significant novel findings over replication attempts and null findings) and relegating the more fundamental work of researchers to outlets that presumably go unread. The older I get, the less patience I have for that sort of approach. It violates the spirit of the scientific enterprise in favor of greed.

In short, I get it: the insular worlds within which we scientists live are microcosms of our aching planet. The system itself is fundamentally broken. Too many of us feel trapped - too trapped to rebel against a system that is clearly stacked against honest researchers and the public. There are no clear rewards for those who rebel. And yet increasingly, I think we must rebel. Someone once wrote something about having nothing to lose but our chains. Maybe that person was on to something.

Saturday, March 16, 2019

A reminder of blog netiquette

My impressions of the norms governing blogging were formed back around the time I first heard of blogs, right around the turn of the century. Yes, that is a long time ago. One important norm is that once a post is published, it remains unchanged. I've seen some reasonable modifications of that norm - corrections for typos within a 24-hour period are probably worth making. Generally, if serious changes need to be made to a post, either the original text is struck through so that readers can see what was first posted (as a means of transparency), or the author creates a new post and acknowledges what went sideways with the old one. One of the most disappointing episodes I've experienced in reading others' blogs was noticing that a blogger had taken a post from earlier in this decade and completely revamped it without revealing what had changed. I really should have taken screenshots of the original post and the changed post. That might have been educational in itself, as long as I could have found a way to illustrate what had happened without it coming across as a sort of "gotcha" hit piece. That all said, there are moments when I become just a bit more jaded about humanity than I already was.

My SPSP Talk in February 2019

I was quite surprised and honored to be included in a panel on the social psychology of gun ownership at the most recent SPSP conference in Portland, based on my work on the weapons effect. Although I consider myself a flawed messenger these days - more a reluctant expert on the weapons effect than anything else - I was excited to attend and to see how an audience would receive my increasingly skeptical view of the topic. As some who read this blog know, my wife was injured in a freak accident just prior to Christmas, and up until the last week of February I had been acting primarily as her caretaker (while taking care of my faculty responsibilities as well!). As a result, I had to cancel my trip.

When I broke that news to the symposium organizer, Nick Buttrick, he worked with me so that I could still participate in some way. We looked into a number of options and settled on an audio PowerPoint slide presentation, which at least allowed me to still be a part of the proceedings - even from a distance. I am grateful for that. If you are ever interested, you can find my slides archived at the Open Science Framework. Just click this link and you will have access to audio and non-audio versions.

There are probably better public speakers, but if nothing else, I do have a story to tell about the weapons effect based on the available evidence. This presentation is based in part on the meta-analysis I coauthored and published late last year, as well as on a narrative review I have in press (aimed at a much more general social science audience), and some new follow-up analyses I ran last fall. I will be giving another version of this talk to an audience of mostly community college and small university educators in the social sciences in April. I am realizing that I am probably not done with the weapons effect. There are truths in that meta-analysis database that still need to be examined, and I would not be surprised if a case could be made for an update in the next handful of years as new work becomes available.

Monday, March 11, 2019

"La lucha sigue, y sigue, y sigue..."

I thought I'd nick a line from one of several books John Ross authored on the Zapatista rebellion in Chiapas before he passed away. If you need a quick translation, "the struggle continues, and continues, and continues..." So what am I on about now? Let me give you a clip of an interesting blog post and then we'll go from there:

In these “conversations,” scholars recommending changes to the way science is conducted have been unflatteringly described as sanctimonious, despotic, authoritarian, doctrinaire, and militant, and creatively labeled with names such as shameless little bullies, assholes, McCarthyites, second stringers, methodological terrorists, fascists, Nazis, Stasi, witch hunters, reproducibility bros, data parasites, destructo-critics, replication police, self-appointed data police, destructive iconoclasts, vigilantes, accuracy fetishists, and human scum. Yes, every one of those terms has been used in public discourse, typically by eminent (i.e., senior) psychologists.

Villainizing those calling for methodological reform is ingenious, particularly if you have no compelling argument against the proposed changes*. It is a surprisingly effective, if corrosive, strategy. 

Yes, it is, at least in the short term. I haven't heard open science advocates referred to as The Spanish Inquisition just yet, but then again "nobody expects The Spanish Inquisition!"

But I digress. When a whole group of scholars and educators is characterized as Nazis or Stasi, that ought to be a red flag. After all, I'd like to think we all agree that Nazis are bad, and that the Stasi (or KGB or any other such outfit) is not an organization we'd want to emulate. Even the term authoritarian is quite loaded. I study authoritarianism as part of my research program, and so that term definitely makes an impression - and definitely not in a good way. But what if the people being called all these names are nothing like that? If one has formed an impression of someone as the equivalent of a member of one of these awful groups, would there even be any motivation to interact? That is my real concern: the stifling of conversations we need to have.

I've noticed similar language used to describe researchers who have presented findings that run counter to popular claims that various forms of mass media influence aggression and violence. Being a skeptic in this particular corner of the research universe can get you referred to as a "holocaust denier" or "industry apologist" (among other epithets). In the short term, that might work for those who have legacies to defend. Long term? What happens when more and more of the studies citing your work are citing it primarily to refute it? Ignoring and name-calling will only get you so far. Maybe things are not as settled as was previously thought. But once that well has been tainted, productive dialog is not exactly going to happen. And that is one hell of a shame.

Since I've seen this before, I am not surprised that calls to make the way we conduct our research more transparent have met similar resistance. As someone who found myself rethinking my methods courses a few years ago, I can tell you that my initial reaction to the crisis in confidence (as it now encompasses a replication crisis, a measurement crisis, and a theoretical crisis) was fairly sanguine. Then I became more concerned as more evidence and commentary came in. Since I did not know the main proponents of open science personally, I decided to follow their work from a distance. It turns out I was overly cautious at first (after all, when a group gets characterized negatively...). Over time I have waded in and interacted. And I don't see Stasi, or authoritarians, or human scum. What I see are mostly young-ish researchers who share a similar set of concerns about the field, who seem committed to getting it as right as is humanly possible, and who are fun to talk to. I realize that negative portrayals may make it hard for others to see things as I do. I also won't discount that there are probably some bad actors among open science's proponents (but isn't that true in practically any facet of human existence?).

The folks who can teach you how to detect data analysis reporting errors and spot possible p-hacking, and who can offer solutions that may prevent many of the problems that have plagued the psychological sciences, are worth a fair hearing. Probably much more than that. Business as usual hasn't exactly been working, and the evidence continues to mount each time a classic finding gets debunked or a major work turns out to be so full of errors that it is no longer worth citing.
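As one small illustration of the kind of reporting-error check these folks advocate (statcheck recomputes reported p-values; the GRIM test checks reported means), here is a minimal sketch of a GRIM-style consistency check. The function name and rounding convention are my own; the underlying idea is simply that a mean of integer-valued responses must equal some whole-number total divided by the sample size.

```python
# Minimal GRIM-style check (after Brown & Heathers, 2016): for integer-scale
# data, a reported mean is only possible if some integer sum of responses,
# divided by n, rounds to the reported value.

def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if a mean reported to `decimals` places is achievable
    from n integer-valued responses.

    This simple nearest-sum version is reliable when n is small relative
    to the reported precision (roughly n < 10**decimals).
    """
    total = round(reported_mean * n)          # nearest achievable integer sum
    achievable = round(total / n, decimals)   # mean implied by that sum
    return achievable == round(reported_mean, decimals)

# For example, a mean of 5.19 from n = 28 integer responses is impossible,
# while 5.18 is fine (145 / 28 rounds to 5.18):
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.18, 28))  # True
```

Checks like this require nothing beyond the summary statistics printed in a paper, which is exactly why they have proven so useful for spotting articles whose reported numbers cannot all be true at once.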

In the meantime, I have no illusions about the academic world. It has always been a rather contentious arena. Arguing over data or theory may or may not be fruitful. Arguing over how to build a better mousetrap probably is fruitful. In those cases, the more interaction, the better. Maybe we will end up not agreeing on much. Maybe we'll find common ground. Name-calling on the other hand is pointless, and merely betrays a lack of ideas, or at minimum a lack of confidence in one's ideas. Noting that basic fact won't stop that particular phenomenon. Comes with the territory. Best we can do is spot toxic behavior when it occurs, and try to accept it for what it is and minimize our exposure to those who genuinely are bad actors. And realize in the process that the struggle to change a field for the better is likely one that will feel endless. The struggle will continue, and continue, and continue.

Onward.

Sunday, March 10, 2019

A Current Opinion on Current Opinion in Psychology

I found the following tweet to be amusing - in the sense of being funny/not-funny:
Typo note: I think Chris Noone meant "not show any benefit!"

With that out of my system, let me offer a couple thoughts about Current Opinion in Psychology.  The journal is published by Elsevier. Its stated mission is to:

In Current Opinion in Psychology, we help the reader by providing in a systematic manner: 
  1. The views of experts on current advances in psychology in a clear and readable form. 
  2. Evaluations of the most interesting papers, annotated by experts, from the great wealth of original publications.
Sounds noble enough. In theory, those who have library access, some coin to shell out for the articles (most are behind a paywall, as it costs over $4000 for authors to publish an open access article in the journal - an estimate based on what similar Current Opinion journals charge), or facility with alternative means of obtaining full-text copies (e.g., sci-hub.tw) will be able to read short, understandable summaries of recent developments in important areas within the psychological sciences. Note that the journal charges libraries $2358 per year (plus tax), and each article costs $31.50 to download. That's a problem even if the "current opinion" on offer could be counted on for accuracy. But what if the "current opinion" is less than accurate? How does that happen? Good question. That's one for the editors in charge of the journal, the Editorial Assistant (April Nishimura), and the Associate Publisher for Life Science and Social Science (Kate Wilson) - and they aren't likely to talk.

I certainly think questions need to be asked about how guest editors get chosen in the first place. Are they recruited? Do they come up with what they think would be a brilliant idea for an issue and get a green light? How do guest editors go about selecting authors to write brief narrative reviews? What decision criteria do they rely upon? Do they simply choose their best friends? Do they look for skeptics who could provide a fair and balanced treatment of the topics covered in a particular issue? What is really going on with the peer review process? I've noted my experiences before. Let's just say that a 24-hour turnaround time frightens me, as I have no reason to believe a reviewer could digest even a brief manuscript and properly scrutinize it in that time frame. How is the much-ballyhooed EVISE platform actually used by the editorial team responsible for this journal? Is it actually used to properly vet manuscripts from the moment of initial submission onward? If not, why not? That becomes a critical question given how much Elsevier loves to brag about its commitment to COPE guidelines. If not, we are also left with an observation made elsewhere: that all EVISE does is create a glorified PDF file that any one of us could create with Adobe Acrobat. We are left wondering whether any genuine quality control exists - at least in a way that is meaningful for those of us working in the psychological sciences. What if the process, from recruiting guest editors to vetting manuscripts, is so fundamentally flawed that much of the "current opinion" published is more akin to the death throes of theoretical perspectives moments before they are swept into the dustbin of history?

At the end of the day, I am left wondering whether the psychological sciences would be better served without this particular journal, and whether we experts could instead simply blog our reviews of recent developments in our respective specialties, or offer the occasional tweet storm. Heck, it would certainly save readers some time and money, and authors some headaches. That is my current opinion, if you will.