The blog of Dr. Arlin James Benjamin, Jr., Social Psychologist
Wednesday, February 12, 2020
"Less than lethal weapons effect" - a quick thought
Last year, an article by Ariel et al. (2019) landed on my radar. It's of interest to me simply because it has some relevance to weapons effect research. In summary, police officers were randomly assigned to patrol with or without tasers present. Most of the analyses concentrated on the behavior of the officers. For my purposes, what was interesting was the analysis examining the behavior of suspects - more specifically, whether suspects were more prone to attack officers who carried tasers than officers who did not. The authors found that suspects were significantly more likely to attack an officer carrying a taser than one who was not. It's counter-intuitive, for sure. Attacking an armed officer strikes me as a great way to end up feeling the effects of a taser, as well as facing additional charges. Then again, we humans are not necessarily rational animals. So there's that.

Thing is, the raw numbers are really not much to write home about. Almost no one in the sample attacked officers in either condition, and the proportions per thousand strike me as underwhelming. So just for kicks, I did a quick and dirty effect size calculation. I had a raw B weight, the SE, and the sample size, so estimating the SD was fairly straightforward (SE * sqrt(N)). I divided the raw B by the estimated SD and ended up with a Cohen's d of 0.176. In other words, this is a fairly small effect. The finding is statistically significant; I have questions about its practical significance.

On the positive side, at least this was an effort to test the hypothesis that the mere presence of a weapon (in this case a taser) could influence some form of aggressive behavior in an ecologically valid manner. I'm admittedly pretty jaded about the weapons effect as a phenomenon, but at the very least the research Ariel et al. (2019) conducted shows us a way forward: get out of the lab, get into everyday situations, and see if there really is something of interest. We may end up realizing there isn't and likely never was. Or we may end up surprised to find that there was some substance to the old Berkowitz and LePage (1967) experiment. Time will tell.
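For anyone curious, that back-of-the-envelope conversion is easy to script. Here's a minimal sketch in Python; the B, SE, and N values are placeholders rather than the figures reported by Ariel et al. (2019), so you would plug in the published numbers yourself:

```python
import math

def cohens_d_from_b(b, se, n):
    """Quick-and-dirty Cohen's d from a raw regression coefficient.

    Estimates the SD of the outcome as SE * sqrt(N), then divides
    the raw B weight by that estimate.
    """
    sd_estimate = se * math.sqrt(n)
    return b / sd_estimate

# Example call with placeholder values (not the paper's numbers):
# d = cohens_d_from_b(b=REPORTED_B, se=REPORTED_SE, n=SAMPLE_N)
```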
Thursday, January 30, 2020
While we're on the topic of fraud...
...maybe read this too. In fact, definitely read this, too. So much gets missed in the peer review process, and a fraudster really doesn't need to be especially brilliant to get findings based on dodgy or non-existent data into the body of published findings. I'm a much better peer reviewer than I once was, but mostly because I use some recently developed tools of the trade from post-peer review. That said, I use those same post-peer review tools whenever I read much of anything anymore. And yet I know they can only do so much. We're dealing with an entrenched culture that makes it difficult to actually change behavior. In the meantime, I have seen a series of errata and a corrigendum from a particular lab that are just as troubling as the original papers - in some cases more troubling. And it seems like there is so little to be done about it. That is one thing keeping me up at night.
One postscript - Joe mentions one particular paper where the data were indeed troubling but in which it took the dogged effort of Pat Markey (and later Malte Elson) to get access to the data to try to reproduce it. A retraction followed. Like Joe, I didn't catch it either. Nor would I in the peer review process had I served as a peer reviewer.
Another Stapel situation?
It's hard to say right now. What is clear is that another PI has seen a couple of his papers retracted due to data irregularities. His former students, post-docs, and coauthors are doing the right thing. At the end of the day, that's what matters. One thing to keep in mind: when we start looking at cases of potential fraud in scientific research, the people most affected are early career researchers (ECRs): grad students and post-docs in particular. With fewer lines on a CV, any retraction will have disproportionate repercussions any time they are on the job market, applying for tenure, etc. All of us, though, are negatively affected, to the extent that policy, health, and other personal or professional decisions are based upon research built on fraudulent (or fabricated) data. The old Russian saying "trust but verify" is seeming more apt all the time. Maybe I'd leave the "trust" part out and just say "verify."
Tuesday, January 14, 2020
Post-script to the preceding
At the end of the day, we as scientists need to discourage attempts to hijack our efforts to address legitimate problems in some of our scientific fields in the name of promoting pseudoscience or denying empirically reproduced, replicated, and verified evidence of specific phenomena. It has been a long time since I had any exposure to the book "Merchants of Doubt", which documented the tobacco industry's decades-long (and for a while successful) effort to cast doubt on what was increasingly overwhelming evidence of negative health outcomes associated with using tobacco products. There are merchants of doubt now targeting other phenomena - climate change, immunizations for childhood diseases, etc. We as scientists have scarce resources. The less legitimacy we give to these contemporary merchants of doubt, in the form of conference attendance, membership dues, etc., the better. For me, that is easy. I only get partial funding for one conference per fiscal year. I have to make that conference count for something.
Sunday, January 12, 2020
When a conference on the crisis in confidence is not what it appears
Read Dorothy Bishop's blog on the matter here. Not every organization that appears on the surface to show interest in improving the replicability and reproducibility of scientific results is actually interested in doing so. Anything sponsored by the National Association of Scholars (which conveniently shares an acronym with the genuinely respected National Academy of Sciences) is best treated with some suspicion. The choice of acronym alone seems designed to confuse. Beyond that, Dr. Bishop does a good job of exposing this organization's political agenda. In other words, this bunch is far less interested in improving science than in protecting a particular political perspective in line with the current occupant of the White House with regard to climate science.
Wednesday, January 1, 2020
Happy New Year!
I wish to thank each of you who has visited over the last few years and to wish each of you a wonderful 2020.
My plans for this blog in 2020 are simple enough. I still have some unfinished business with research from a very error-prone lab, I want to continue to examine allegiance effects in weapons effect research, and since this is an election year, I expect I will spend a bit of time offering a few pointers on media literacy. The last bit will likely be a bit of an update of some posts from late 2016 as I was processing the aftermath of the US Presidential election. Expect to see posts advocating for reform in my corner of the sciences. Expect new links - in particular ones to resources for those who value post-peer review as much as I do.
In the meantime, let's take a moment to celebrate and to enjoy those things that truly matter. On second thought, let's be generous with the time we spend enjoying those things that truly matter.
Onward.
Friday, December 27, 2019
Data sleuthing made easy
You all know that I have done just a bit of data sleuthing here and there. I do so without any especially fancy background in statistics. I have sufficient coursework to teach stats courses at the undergraduate level, but I am no quantitative psychologist. So, I appreciate articles like How to Be a Statistical Detective. The author lays out some common problems and how any of us can use our already existing skills to detect them. I use some of these resources already, and am reasonably adept at using a calculator. I will likely add more links to these resources to this blog.
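To give a flavor of what I mean by calculator-level sleuthing, here is one example - my own illustration, not one drawn from the article itself: a minimal GRIM-style consistency check in the spirit of Brown and Heathers (2017). Given a reported mean of integer-valued data (say, Likert items) and a sample size, it asks whether that mean is even arithmetically possible:

```python
import math

def grim_consistent(reported_mean, n, decimals=2):
    """GRIM-style check: a mean of n integer scores must be a multiple
    of 1/n, so some reported means are arithmetically impossible.

    Returns True if at least one integer total rounds to the reported
    mean. Note: round() uses Python's banker's rounding; papers often
    round half-up, which can matter in edge cases.
    """
    target = round(reported_mean, decimals)
    tol = 0.5 * 10 ** (-decimals)
    lo = math.floor((target - tol) * n)
    hi = math.ceil((target + tol) * n)
    return any(round(total / n, decimals) == target
               for total in range(lo, hi + 1))

# Example: a reported mean of 5.19 on integer data with n = 28 is
# impossible (145/28 rounds to 5.18, 146/28 to 5.21):
print(grim_consistent(5.19, 28))  # False -> worth a closer look
```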
This article is behind a paywall, but I suspect my more enterprising readers already know how to obtain a copy. This article is fundamental reading.