Wansink did not just run some hokey experiments that happened to be eye-catching. He appeared on various morning news shows plugging his lab's findings, fooling the public in the process. His lab's reported findings were used by policymakers, and although the fact that those findings are now in question is perhaps not a matter of life and death, they certainly did not serve the public interest. Here is a tweet that gives some idea of how seriously his research was taken (it shows his bio from a plenary address at SPSP 2012):
It will be tempting to write off Wansink as a guy who did flashy-but-silly studies. Don't. He was taken seriously by scientists, policymakers, and the public. Here is his bio from an invited plenary address he gave to an audience of peers (SPSP 2012) https://t.co/wuu59LRCcL pic.twitter.com/dWIpMiIsxr— Sanjay Srivastava (@hardsci) September 20, 2018
The sleuths who did the grunt work of uncovering the problems with Wansink's research will likely never be thanked. They will never receive awards, nor can they list those efforts on their CVs. But we all owe them a huge debt of gratitude. For the record, they are Nick Brown, James Heathers, Jordan Anaya, and Tim van der Zee. They exposed questionable research practices at considerable risk to their own careers.

Perhaps more to the point, they exposed Wansink's research practices as symptomatic of an academic culture that privileges quantity over quality, publication in high-impact journals over rigor, statistically significant findings over nonsignificant ones, clickbait-ready research over careful work, and secretiveness over openness. That broader culture is what needs to change. As James Heathers would no doubt argue, we change it by using the tools available to detect questionable practices, by rethinking how we do peer review, and by making certain that we teach our students to do likewise. We need to be more open in sharing our methodology and our data - that is the point of registering or preregistering our research protocols and archiving our data so that our peers may examine them. We need to rethink what matters in our research. Is it getting a flashy finding that can be easily published in a high-impact journal and net a TED talk, or are we more interested in simply being truth seekers and truth tellers, no matter what the data tell us? How many publications do we really need? How many citations? Those are questions we should be asking at each step of our careers. How much should we demand of editors and of authors as peer reviewers? Why should we take authors' findings as gospel? Could journals vet submissions (possibly using software like SPRITE) to ascertain the plausibility of the reported statistics, and if so, why are they not doing so?
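For readers unfamiliar with SPRITE: the idea, developed by Heathers and colleagues, is that if a paper reports a mean, a standard deviation, a sample size, and a bounded integer scale, you can search for integer samples consistent with those numbers - and if no such sample exists, the reported statistics cannot be right. What follows is a simplified, illustrative sketch of that kind of plausibility check, not the actual SPRITE software; the function name and parameters are my own for illustration.

```python
import random
import statistics

def sprite_search(mean, sd, n, lo, hi, tol=0.05, max_iter=20000, seed=0):
    """Search for an integer sample on the scale [lo, hi] whose sum matches
    the reported mean and whose sample SD falls within `tol` of the
    reported SD. A simplified, SPRITE-style plausibility check."""
    rng = random.Random(seed)
    # Build a starting sample whose sum matches the reported mean exactly.
    total = round(mean * n)
    base, extra = divmod(total, n)
    sample = [base + 1] * extra + [base] * (n - extra)
    if any(v < lo or v > hi for v in sample):
        return None  # the reported mean is impossible on this scale
    for _ in range(max_iter):
        cur_diff = abs(statistics.stdev(sample) - sd)
        if cur_diff <= tol:
            return sorted(sample)
        # Nudge one value up and another down: the mean is preserved,
        # only the spread changes.
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j or sample[i] + 1 > hi or sample[j] - 1 < lo:
            continue
        sample[i] += 1
        sample[j] -= 1
        # Keep the move only if the SD gets no further from the target.
        if abs(statistics.stdev(sample) - sd) > cur_diff:
            sample[i] -= 1
            sample[j] += 1
    return None  # no plausible sample found within the iteration budget

# A reported mean of 3.5 and SD of 1.0 for n = 10 on a 1-to-5 scale:
candidate = sprite_search(3.5, 1.0, 10, 1, 5)
```

A reviewer or journal could run a check like this against each reported cell of a results table; a `None` result (after a more exhaustive search than this sketch performs) flags statistics that deserve a closer look.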
There is some speculation that had Wansink not made that fateful blog post in December 2016, he would still be going about business as usual and would never have faced any repercussions for his faulty research. That is a distinct possibility. A more optimistic case can be made that the truth would have caught up with him eventually, as the events that set off the replication crisis continue to unfold and as our research culture becomes more attuned to rooting out questionable work. Maybe he would not be retiring at the end of the spring term of 2019, but a few years later - still under a cloud. I also wonder how things might have played out had Wansink tried a different approach. When his research practices were initially challenged, he doubled down. What if he had cooperated with the sleuths who wanted to get to the truth about his findings? What if, faced with evidence of his mistakes, he had owned them, taken an active role in correcting the record, and worked to change the practices in his lab? He might still have ended up with a series of retractions and faced plenty of uncomfortable questions from a variety of stakeholders, but the story might have had a less tragic ending.
This is not a moment for celebration, although there is some comfort in knowing that the record in at least one area of the sciences is being corrected. This is a moment for reflection. How did we arrive at this point? How many more Wansinks are in our midst? What can we do as researchers, as peer reviewers, and as post-publication reviewers to leave our respective corners of the psychological sciences just a bit better than they were when we started? How do we make sure that we actually earn the trust of the public? Those are the questions I am asking myself tonight.