Friday, September 28, 2018

Following up on Wansink

Andrew Gelman is on point in this post. Here is a clip as a starting point:
I particularly liked this article by David Randall—not because he quoted me, but because he crisply laid out the key issues:
The irreproducibility crisis cost Brian Wansink his job. Over a 25-year career, Mr. Wansink developed an international reputation as an expert on eating behavior. He was the main popularizer of the notion that large portions lead inevitably to overeating. But Mr. Wansink resigned last week . . . after an investigative faculty committee found he had committed a litany of academic breaches: “misreporting of research data, problematic statistical techniques, failure to properly document and preserve research results” and more. . . . Mr. Wansink’s fall from grace began with a 2016 blog post . . . [which] prompted a small group of skeptics to take a hard look at Mr. Wansink’s past scholarship. Their analysis, published in January 2017, turned up an astonishing variety and quantity of errors in his statistical procedures and data. . . . A generation of Mr. Wansink’s journal editors and fellow scientists failed to notice anything wrong with his research—a powerful indictment of the current system of academic peer review, in which only subject-matter experts are invited to comment on a paper before publication. . . . P-hacking, cherry-picking data and other arbitrary techniques have sadly become standard practices for scientists seeking publishable results. Many scientists do these things inadvertently [emphasis added], not realizing that the way they work is likely to lead to irreplicable results. Let something good come from Mr. Wansink’s downfall.
But some other reports missed the point, in a way that I’ve discussed before: they’re focusing on “p-hacking” and bad behavior rather than the larger problem of researchers expecting routine discovery.
That, I think, is how we should be framing this. It is partly about scientists engaging in questionable behavior, but the point is not to pillory them. Rather, we should ask ourselves hard questions about a research culture that demands a positive result every time we run a study. News flash: run enough studies and a lot of the findings will be, at best, inconclusive. We should also focus on the fundamentals of research design, and on making sure that any instruments used for measurement (whether behavioral, cognitive, attitudinal, etc.) are sufficiently reliable and have been validated.

When I asked in an earlier post how many Wansinks there are, I should have added a clarifying statement: most of the scientists who could become the next Wansink are well-intentioned individuals adapting to a particular set of environmental contingencies [1] (ones that reinforce positive results, or what Gelman calls routine discovery), and working with measures that are, quite frankly, barely warmed-over crap. In my own area of social psychology, I would further urge us to make sure that the theoretical models we rely on in our specialty areas are actually measuring up. In aggression research, it is increasingly obvious to me that one model I have relied on since my grad school days needs to be rethought or abandoned altogether.
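To make that "news flash" concrete, here is a minimal simulation sketch. This is my own illustration, not anything from Gelman's post or the Wansink investigation, and the effect size, sample size, and reliability values are assumptions chosen purely for illustration.

```python
# Sketch: a real but modest effect, studied with small samples and a so-so
# measure, will come out "positive" only a minority of the time.
# All numbers below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 10_000   # hypothetical independent two-group studies
n_per_group = 30     # a fairly common cell size in lab research
true_d = 0.30        # assumed modest true effect (Cohen's d)
reliability = 0.60   # assumed mediocre measure

significant = 0
for _ in range(n_studies):
    # Measurement error attenuates the observed standardized effect,
    # roughly by a factor of sqrt(reliability).
    observed_d = true_d * np.sqrt(reliability)
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(observed_d, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    significant += p < 0.05

print(f"Studies reaching p < .05: {significant / n_studies:.1%}")
# Under these assumptions only about 15% of studies clear p < .05,
# even though the effect is real; the rest are inconclusive.
```

The point is not the exact percentage, which depends entirely on the assumed numbers, but that a culture expecting a publishable positive result from every study is setting researchers up to either fail or cut corners.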

As we move forward, we do need to figure out what we can learn from the case of Brian Wansink, or from anyone else with a checkered history of questionable findings. I would recommend focusing less on the shortcomings of the individual (there is no need to create monsters) and more on the behaviors, and on how to change those behaviors, both individually and collectively.

[1] I am no Skinnerian, but I do teach Conditioning and Learning from time to time. I always loved that term, environmental contingencies.
