This week is dedicated mostly to decompressing. I turned in my grades Monday at both my home campus and at the one where I adjunct. I chaired what I hope is the final curriculum committee meeting of the academic year, completed the paperwork that comes with chairing that particular committee, and went to an advising session for incoming first-year students (I will attend one more later this week). Mostly, though, I have been resting, catching up on some personal reading, and just spending time with family. I am also doing a bit of reflecting on the past semester and going over what went right and what could use some improvement.
Teaching about replicability is a bit new for me. I have broached the topic here and there, but it wasn't until last fall semester that I really devoted considerable time to this important topic in one of my upper-level methodology courses. This spring, I worked a bit more systematically at weaving facets of the broader conversation on replicability into all of my courses. In Social Psychology, I added notes and discussion regarding classic studies that appear to be little more than zombies (ego depletion and IATs top my list).

In statistics, I am probably looked at as "old school" to the extent that I rely on null hypothesis testing, but I am increasingly emphasizing the importance of effect size information and statistical power in my presentations and demonstrations. If I can get the university to agree to it, I will likely download some freeware (that I know is safe) to show students how to determine the sample sizes they need to achieve sufficiently powerful experimental research. Until then, my discussions will have to be a bit more conceptual. Effect size calculations are easy, and I routinely include that discussion as we examine each new hypothesis testing technique, emphasizing the importance of replication along the way. For the most part, I haven't received complaints from students for doing so, even if that means deviating from textbook material. Actually, students who get to know me well notice that I often view whatever textbook I am using with a certain amount of contempt, so they expect a certain amount of my class time to be "off the books," as it were. I'm still fairly minimal in my treatment of replicability issues in my lower-level courses, but at the senior level, we spend about a third of the term on nothing but replicability - my Capstone section is a good example.
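To make that power discussion a little more concrete: until a dedicated power-analysis program is available, even a short script can do the core arithmetic. The sketch below (my own illustration, not tied to any particular freeware package) computes Cohen's d from summary statistics and then estimates the per-group sample size for a two-sided, two-sample t test using the standard normal-approximation shortcut found in most power-analysis textbooks.

```python
import math
from statistics import NormalDist  # standard library, Python 3.8+

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided two-sample t test.

    Uses the normal-approximation formula n = 2 * ((z_{1-a/2} + z_{power}) / d)^2,
    which slightly underestimates the exact t-based answer for small samples.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = .05
    z_power = z.inv_cdf(power)          # about 0.84 for power = .80
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

# A medium effect (d = 0.5) at alpha = .05 and 80% power:
print(n_per_group(0.5))  # prints 63 (the exact t-test answer is 64)
```

Students are often surprised by how quickly the required n grows as d shrinks - halving the effect size roughly quadruples the sample needed - which is exactly the intuition a more conceptual discussion tries to build.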
In that course, students read the original Open Science Collaboration article from a few years ago, along with several other articles, and they are expected to write critically about what they are reading. I am still experimenting with how to assess their knowledge, and after reading their final papers this semester, I will probably drastically alter the nature of that final paper to force the issue, given its importance. I have not yet seen the course evals. Hopefully those will give me some direction, although my experience has been that evals tell me little that is useful. The more helpful feedback comes from peers. Certainly any suggestions for readings are always welcome.
I am still struggling to figure out how much replicability material I can expose students to at the lower levels without being overwhelming. Some suggestions in that regard would be helpful as well. The upshot is that I did a lot of rebooting of my courses. I think I was a bit underwhelming in my first attempt to reboot my social psych course, and hopefully, with some time over the summer, I will have that one a bit closer to my satisfaction. If I had my way, I'd ditch the textbook and rely strictly on open educational resources for that course, but because of an agreement I made with a colleague who also teaches the same course, I will have to refrain for now, pointing out the zombies as I go along.
One of the most inspiring professors I encountered as an undergraduate was a gentleman named Edward Stearns. By the time I started taking classes from him, he was close to retirement. What was fun about him was that although he appeared somewhat old-school, he made one hell of an effort to stay current. I doubt he ever published a manuscript during his career, but he was always toying around with new software and teaching us material that was, for the time, quite cutting edge. He could even use contemporary slang correctly. Increasingly, I want to be that person. I will undoubtedly continue to have my flaws as a person and as a researcher, but I do want to model the importance of learning from one's mistakes and of always trying out new techniques and ideas, in many ways learning with my students in the process. I think there is a tendency among those of us who are mid-career, and definitely among senior educators and scholars, to get set in our ways and become blind to the changes occurring around us. I'm trying a different path, and it is one that is increasingly leading me to side with some relatively young skeptics and "data thugs" in the process. Where that path leads is uncertain, but I hope it is one that enables me to be a better advocate of my particular science to my students and a better researcher during my remaining couple of decades of work ahead.