Friday, September 14, 2018

Reforming Psychology: Who Are These People?

Let's continue just a little bit from my last post. Right now I am merely thinking out loud, so take at least some of this with a few grains of salt. The Chronicle article I linked to in that earlier post was quite adept at finding some of the more extreme statements and magnifying them, and it was at times factually incorrect (Bem's infamous ESP article in JPSP was published in 2011, not 2010!). That makes for clicks, and presumably ad revenue, but it may not exactly shed light on who the stakeholders are.

Among the reformers, I suspect we have a varied group, representing multiple specialties and various levels of prominence within the academic world. Some are grad students who probably have the idealism and zeal I once experienced as a grad student, and who, like me then, are legitimately frustrated by their relative lack of power to change a status quo that leaves a lot to be desired. Others are post-docs and early career researchers whose fates hang in the balance based on evaluations by some of the very people whose work they may be criticizing. Hiring and tenure decisions are certainly a consideration. Others may be primarily educators who could also be caught in the cross-hairs of those with considerably more prestige. For those of us who are a bit less prominent, it is easier for those used to getting their way to fling unfounded accusations at us, knowing full well that for now they will be taken at face value in the public sphere. At least in these early moments, the effort to reform psychological science appears to be a high-risk enterprise.

There may be a great deal of diversity in terms of how to go about reform. Given my generally cautious nature, I might want to tread carefully - test drive various approaches to making our work more transparent and see what works and what doesn't. Others may want a more immediate payoff. Some of us may disagree on methodological and statistical practices. The impression I get is that regardless of where the reformers stand, there is a consensus that the status quo no longer works and that the system needs to change. The other impression I get is that there is a passion for science in all of its messiness. These are not people with vendettas, but rather people who want to do work that matters, work that gets at closer approximations of the truth. If someone's work gets criticized, it has nothing to do with a need to take down someone famous, and everything to do with getting at what is real or not real about the foundations underlying the claims in specific works. I wish this were better understood. For the educators among the reformers, we just want to know that what we teach our undergrads is actually reality-based. We may want to develop and/or find guidance on how to teach open science to research methods students, or on how to show in our content courses how a classic study was debunked.

Of course, keep in mind that I am basing these observations on a relatively small handful of interactions over the last few months in particular. I certainly have not done any systematic data collection, nor am I aware of much. I do think it is useful to realize that SIPS is evenly split between men and women in its membership, and really does have diverse representation in terms of career levels (skewing, I think, toward early career), specialties, and teaching load. It is also useful to realize that SIPS is likely only one part of a broader cohort of reformers, and any article discussing reforms to psychological science needs to take that into account.

As for those defending the status quo, I suspect there is also a great deal of variation. That said, the loudest voices are clearly mid- and late-career scholars, many of whom perceive that they have a great deal to lose. There has to be some existential crisis that occurs when one realizes that the body of work making up a substantial portion of one's career was apparently all for nothing. I am under the impression that at least a subset have achieved a good deal of prestige, have leveraged that prominence to amass profits from book deals, speaking engagements, etc., and that efforts to debunk their work could be seen as a threat to all the trappings of what they might consider success. Hence the temptation to occasionally lob phrases like "methodological terrorists" at the data sleuths among the reformers. As an outsider looking into the upper echelons of the academic world, my impression is that most of the status quo defenders are generally decent, well-intentioned people who have grown accustomed to a certain way of doing things and benefit from that status quo. I wish I could tell the most worried among them that their fears about a relatively new reform movement are unfounded. I know I would not be listened to; I have a bit of personal experience in that regard. Scholars scrutinizing data sets are not "out to get you" but are interested in making sure that what you claimed in published reports checks out. I suspect that argument will fall on deaf ears.

I'd also like to add something else: I don't really think that psychology is any meaner now than it was when I started out as a grad student in the 1990s. I have certainly witnessed rather contentious debates and conversations at presentation sessions, have been told in no uncertain terms that my own posters were bullshit (I usually would try to engage those folks a bit, out of curiosity more than anything else), and have seen the work of early-career scholars ripped to shreds. What has changed is the technology. The conversation now plays out on blogs (although those are pretty old-school by now) and social media (Twitter, Facebook groups, etc.). We can now publicly witness, in as close to real time as our social media allow, what used to occur only behind the relatively closed doors of academic conferences and colloquia - and in journal rebuttals that sat behind paywalls. Personally, I find the current environment refreshing. It is no more and no less "mean" than it was then. Some individuals in our field truly behave in a toxic manner - but that was true back in the day as well. What is also refreshing is that it is now easier than ever to debunk findings, and to do so in the public sphere. I see that not as a sign of a science in trouble, but of one that is finally in the process of figuring itself out. I somehow doubt that mid-career and late-career scholars are leaving in droves because the environment is no longer so comfortable. If that were the case, the job market for the rest of us would be insanely good right now. Hint: the job market is about as bleak as it was this time last year.

A bit about where I am coming from: right now I have my sleeves rolled up as I go about my work as an educator. I am trying to figure out how to convey what is happening in psych to my students so that they know what is coming their way as they enter the workforce, graduate school, and beyond. I am trying to figure out how to engage them in thinking constructively about what they read in their textbooks and in various mass media outlets, and how to sort out what it means when classic research turns out to be wrong. I am trying to work out how to create a more open-science-friendly environment in my methods courses. I want to teach stats just a bit better than I currently do. When I look at those particular goals, it is clear that what I want aligns well with what those working to reform our field want. I can also say from experience that my conversations with reformers have been nothing short of pleasant. And even when some work I was involved in got taken to task (I am assuming that if you are reading this, you know my history), nothing was said that was in any way undeserved or untoward. Quite the contrary.

I cast my lot with the reformers - first quietly, and then increasingly vocally. I did so because I remember what I wanted to see changed in psychology back when I was in grad school, and I am disappointed that so little reform transpired back then. There is now hope that things will be different, and that what emerges will be a psychology that really does live up to its billing as a science whose findings matter and can be trusted. I base that hope on changes in editorial leadership, on journals at least tentatively taking steps to enforce more openness from authors, and so on. It's a good start. As I might say in other contexts, there is so much to be done.

Postscript: as time permits, I will start linking to blogs and podcasts that I think will enlighten you. I have been looking at what I have in the way of links and blogroll and realize that it needs an overhaul. Stay tuned...

Reforming Psychology: We're Not Going to Burn it Down!

This post is merely a placeholder for something I want to spend some time discussing with those of you who come here later. There has been a spirited discussion on Twitter and Facebook regarding a recent article in The Chronicle of Higher Education (hopefully this link will get you behind its paywall - if not, my apologies in advance). For the time being I will state that although I have not yet attended a SIPS conference (something I will make certain to correct in the near future), my impression of SIPS is a bit different from how it is characterized in the article. I get the impression that these are essentially reformers, something that is increasingly near and dear to me, who want to take tangible actions to improve the work we do. I also get the impression that in general these are folks who largely share some things I value:

1. An interest in fostering a psychological science that is open, cooperative, supportive, and forgiving.

2. An interest in viewing our work as researchers and reformers as a set of tangible behaviors.

I've blogged before about the replication crisis. My views on what has emerged from the fallout have certainly evolved. It is very obvious that there are some serious problems, especially in my own specialty area (generally social psychology, and more specifically in the area of aggression research), and that those serious problems need to be addressed. Those problems are ones that are fixable. There is no need to burn down anything.

I'll have more to say in a bit.

A tipping point for academic publishing?

Perhaps. George Monbiot has a good opinion piece in The Guardian worth reading. This is hardly his first rodeo when it comes to writing about the transfer of wealth from the public sector to a handful of private-sector publishing conglomerates. What is different is that some stakeholders, such as federal governments, grant agencies, and university libraries, are pushing back at long last. Of course, there's also Sci-Hub, which distributes for free what would otherwise sit behind a paywall. I am a bit late to the party when it comes to Sci-Hub, but I can say that for someone who wants to do some data or manuscript sleuthing, it can be a valuable resource. I've had the opportunity to use the site to quickly determine whether a book chapter was (apparently) unwittingly self-plagiarized, far more efficiently than if I had waited for the actual book to arrive. When I reflect on why I decided to pursue a career as an academic psychologist, it was in large part because I wanted to give psychology away in the public interest (to paraphrase the late George Miller). Publishing articles that get paywalled is a failure to do so. I am not great at predicting the future, and I have no idea what business model will be in place for academic publishing a decade from now. What I can do is express hope that whatever evolves from the ashes of the current system is one that does not bankrupt universities or individual researchers and labs, and that it truly democratizes our work - truly makes our research available to the public. After all, our research is a public good and should be treated as such. That really should not be a revolutionary concept.

Sunday, September 9, 2018

Just a few quick words

Hopefully my running series, Research Confidential, is of some use and interest. If so, I will keep doing more posts along those lines. My main agenda is simply to share with you how I am trying to think through some ethical difficulties, and hopefully to help others do the same. I am increasingly convinced that what was considered customary in our field (social psychology in general and aggression research specifically) is on shaky ethical ground, and I am okay with exposing what appears questionable to me, as well as my own minor role in our field's current state of affairs. I have taught students over the years that methodological soundness is itself an ethical issue, and I increasingly wonder how well I practiced what I preached. I think I have been sharing my concerns with you all over the last several months. I also share with my students that the process of science itself is messy. I think we should embrace the messiness we experience rather than try to hide it. Hence my interest in taking a turn toward open science. The point of experimentation, to me, is simply to experiment: to make mistakes, learn from those mistakes, and share what we learned in the process. We cannot do that if we shy away when our data do not cooperate or our theories appear to be on shaky ground, and give in to the temptation of pretending otherwise. Finally, we have to look at the way we go about publishing our findings, and our reviews of our own and others' findings. The current system is broken, and we are struggling to find a better one. As one who grew up in a culture in which simply recycling prior articles across various journals and book chapters was considered acceptable, I find the practice abhorrent, and I consider any culture that enables that sort of behavior broken beyond repair. To the extent that I gave in to that culture, I am deeply ashamed, and I am glad that the work involved was retracted.
No one, whether as a grad student, early career researcher, or obscure college or university faculty member should ever feel pressured into giving in to that sort of culture. Instead, we should break it, once and for all. Thankfully, there are young scholars who are trying to do just that. I will gladly follow their lead. They are actually doing what people like me back in the 1990s merely talked about. I am hardly happy with the state of affairs in my field, but I am optimistic. We will build a better science of social psychology, and with it a more transparent and cooperative culture.

Saturday, September 8, 2018

Paywall: The Business of Scholarship

Paywall: The Business of Scholarship (Full Movie) CC BY 4.0 from Paywall The Movie on Vimeo. I thought this was an interesting documentary. It does a decent enough job of describing a legitimate problem and some of the possible solutions, and the presentation is fairly nuanced. I finished watching it with the feeling that open access journals as a standard could potentially work, but with some unintended consequences. I note this simply because we need to keep some nuance as we try to figure out better ways of conveying our work to our peers and to the public. Some solutions might sound cool until you realize that individual scholars would have to shell out thousands of dollars in publication fees, which would preserve some of the status quo: researchers with little in the way of a personal or institutional budget would get shut out, and the transfer of taxpayer money to private publishers of publicly funded research would remain in place. I don't know of easy answers, but I do know that the current status quo cannot sustain itself indefinitely.

Friday, September 7, 2018

Research Confidential: The Hidden Data

For now I will be somewhat vague, but perhaps I can get more specific if I don't think I will be breaching some ethical boundary.

One of the on-going problems in the social sciences is that although we conduct plenty of research, only a fraction of those findings ever get reported. What happens to the rest of those findings? That is a good question. I suspect many just end up buried.

It turns out I do have some relevant experiences. One of the problems I wrestle with is what to do about what I know. Although the data are ultimately the property of the taxpayers, the respective principal investigators control their dissemination. In one case, in graduate school, the findings were statistically significant: we found significant main effects and an interaction effect. The problem was that the pattern of results did not lend itself to an easy theoretical explanation. The PI and I puzzled over the findings for a bit before we reached a compromise of sorts: I would get to use the findings for a poster presentation, and then we would just forget that the experiment ever happened. That was a shame, as I thought then, and still think now, that the findings may have shed some light on how a mostly young-adult sample was interacting with and interpreting the stimulus materials we were using. An identical experiment run by one of my grad student colleagues in the department produced data that squared with my PI's theoretical perspective, and those were the findings that got published.

The other set of findings is a bit more recent. Here, the PI had run a couple of online studies intended to replicate a weapons effect phenomenon that a student and I had stumbled upon. The first experiment failed. The use of an internet-administered lexical decision task was likely the problem; the amount of control that would have existed in the lab was simply not available in that research context. The other study was also administered online and used a word completion task as the DV. It also failed to yield statistical significance. This one was interesting, because I could get some convenient information on that DV's internal consistency. Why would I do that? I wondered if our problem stemmed from an often overlooked issue in my field: a failure to pay attention to the fundamentals of test construction and to establish that our measures are psychometrically sound (good internal consistency, test-retest reliability, etc.), leading to substantial measurement error. The idea is hardly a novel insight, and plenty of others have voiced concern about the psychometric soundness of our measures in social psychology. As it turned out in this instance, there was reason for concern. The internal consistency numbers were well below what we would consider minimally adequate. There was tremendous measurement error, making it difficult to detect an effect even if one were there. My personal curiosity was satisfied, but that did not matter. I was told not to use those data sets in any context. So I have knowledge of what went wrong, and some insight into why (although I may be wrong), but no way to communicate any of it clearly to my peers. I cannot, for example, upload the data and code so that others can scrutinize them - and perhaps offer insights that I might not have considered. The data are simply tossed into the proverbial trash can and considered forgotten.
And when dealing with influential researchers in our field, it is best to go along if they are the ones calling the shots. Somehow that does not feel right.
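The logic here - that an unreliable DV attenuates observable effects - can be made concrete with a short sketch. The actual item scores and reliability figures from those studies are not available, so the helper functions below (`cronbach_alpha`, `attenuated_r`) and the numbers fed to them are purely illustrative, not anything from the studies described:

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def attenuated_r(true_r: float, rel_x: float, rel_y: float) -> float:
    """Spearman's attenuation formula: the observed correlation shrinks
    by the square root of the product of the two measures' reliabilities."""
    return true_r * float(np.sqrt(rel_x * rel_y))

# Illustrative numbers only: a true effect of r = .30, measured with a
# perfectly reliable IV and a DV whose internal consistency is only .50,
# would be observed on average as roughly r = .21.
print(round(attenuated_r(0.30, 1.0, 0.50), 2))
```

The point of the sketch is simply that poor internal consistency shrinks the effect you can expect to observe before sampling noise even enters the picture, which is why a badly constructed DV can sink a replication attempt on its own.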

Regrettably, that is all I can disclose. I suspect there are plenty of others - grad students, post-docs, early career faculty, or faculty at relatively low-power colleges and universities - who end up with similar experiences. In these cases, I can only hope that the data sets survive long enough to be included in relevant meta-analyses, but even then, how often does that occur? These data sets, no matter how "inconvenient" they may appear on the surface, may be telling us something useful if we would only listen. They may also tell a much-needed story to our peers as they go about their own work. I may be a relatively late convert to the basic tenets of open science, but increasingly I see openness as a necessary remedy for what ails us as social psychologists. We should certainly communicate our successes, but why not also communicate our null findings, or at minimum publicly archive them so that others may work with those data if we ourselves no longer wish to?

Wednesday, September 5, 2018

Do yourself and me a favor

Whatever else you do in life, please do not cite this article on the weapons priming effect. It has not aged well. I may have stood by those statements when I was drafting the manuscript in early 2016, and I probably still believed them when the article came out in print. It is now badly out of date, and we should sweep it into the dustbin of history. The meta-analysis on which I was the primary author - and more importantly the process that led to understanding what was really going on with this literature - awakened me from my dogmatic slumber (to borrow a phrase from Immanuel Kant). That meta-analysis is more worth citing where relevant, and even then it will need a thorough update within a mere handful of years.