The blog of Dr. Arlin James Benjamin, Jr., Social Psychologist
Tuesday, May 28, 2019
Interesting post by Gelman recently. I am on the mailing list from which the quoted email came. It was in reference to revelations about the Zimbardo prison experiment that cast further doubt on its legitimacy. As someone watching HBO's Chernobyl series, I find something almost Soviet in the mindset expressed in that email. The thing about clampdowns is that they tend to generate further cynicism, which erodes the edifice upon which a particular discipline or sub-discipline is based. If I could, I'd tell these folks that they are only making the changes they are fighting against that much more inevitable, even if for a brief spell the lives of those labeled as dissidents are made a bit more inconvenient.
The title for this post and Gelman's is inspired by a song by The Clash.
Sunday, May 19, 2019
A Little Matter of Data Quality
A quote from Andrew Gelman:
So it’s good to be reminded: “Data” are just numbers. You need to know where the data came from before you can learn anything from them.
If you have followed my blog over the last few months, you have an idea of what I've been going on about, yeah? Numbers mean squat if I cannot trust their source. Think about that the next time someone gives you an improbable claim, offers up some very complex-looking tables, figures, and test statistics, and then hopes you don't notice that the tables are a bit odd, that the marginal means and cell means don't quite mesh the way they should, or that there were serious decision errors. Beware especially of work coming from researchers who are unusually prolific at publishing findings using methods that would take heroic team efforts to publish at that rate, let alone a single individual. Garbage data give us garbage findings more often than not. Seems like a safe enough bet.
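For anyone wondering what I mean by cell and marginal means not meshing: in a factorial design, each marginal mean should simply be the sample-size-weighted average of the cell means in its row or column, so a reader armed with the table can check the arithmetic. Here is a minimal sketch of that check in Python; every number below is invented purely for illustration, not taken from any specific paper.

```python
# Sanity check: in a factorial design, each reported marginal (row/column) mean
# should equal the sample-size-weighted average of the cell means it summarizes.
# All numbers below are invented purely for illustration.

def weighted_mean(means, ns):
    """Sample-size-weighted average of cell means."""
    return sum(m * n for m, n in zip(means, ns)) / sum(ns)

# Hypothetical 2 x 2 design: rows = media condition, columns = participant sex.
cell_means = {("violent", "male"): 5.2, ("violent", "female"): 4.8,
              ("nonviolent", "male"): 4.1, ("nonviolent", "female"): 3.9}
cell_ns    = {("violent", "male"): 50, ("violent", "female"): 50,
              ("nonviolent", "male"): 50, ("nonviolent", "female"): 50}

# Row (marginal) means as they might be printed in a paper's table.
reported_row_means = {"violent": 5.0, "nonviolent": 4.3}

for row, reported in reported_row_means.items():
    cells = [key for key in cell_means if key[0] == row]
    implied = weighted_mean([cell_means[k] for k in cells], [cell_ns[k] for k in cells])
    flag = "" if abs(implied - reported) < 0.05 else "  <-- does not mesh"
    print(f"{row}: reported {reported:.2f}, implied {implied:.2f}{flag}")
```

In this toy example the "nonviolent" row mean cannot be recovered from its cell means, which is exactly the kind of mismatch that should prompt a request for the raw data.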
I go on about this because there is plenty of dodgy work in my field. There is reason to be concerned about some of the zombies (i.e., phenomena that should have been debunked but continue to be taught and treated as part of our popular lore) in my field. Stopping the proliferation of these zombies at this point is a multifaceted effort. Part of that effort is making sure we can actually examine the data from which findings are derived. In the meantime, remember rule #2 for surviving a zombie apocalypse (including an apocalypse of zombie concepts): the double tap.
Monday, May 13, 2019
Casus Belli
I know I have been hard on work from Qian Zhang's lab at Southwest University for a while now. I have my reasons. I am mainly concerned with a large number of papers in which there are many serious errors. I can live with the one-off bad article. That happens. What I am reading suggests either considerable incompetence across multiple studies or something far more serious. There is a pervasive pattern of errors that is consistent across multiple articles. That I or any of my colleagues were stonewalled when asking for data is not acceptable given the pattern of results over the course of the last several years. My recent experiences have turned me from "trust in peer review" to "trust but verify." If verification fails, trust goes bye-bye. Just how it is.
Given the sheer quantity of articles and the increasing level of impact each new article has, I have good cause to be concerned. I am even more concerned given that well-known American and European authors are now collaborators in this research. They have reputations on the line, and the last thing I want for them is to find themselves dealing with corrections and retractions. Beyond that, I can never figure out how to say no to a meta-analysis. The findings in this body of research are ones that I would ordinarily need to include. As of now, I am questioning whether I could even remotely hope to extract accurate effect sizes from this particular set of articles. I should never find myself in that position, and I think that anyone in such a position is right to be upset.
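To make that worry concrete: for a simple two-group comparison, a meta-analyst typically recomputes Cohen's d from the reported means, standard deviations, and group sizes, and then cross-checks it against the reported t (or F) statistic. When those pieces of a paper disagree, there is no defensible number to carry into the meta-analysis. A minimal sketch of that cross-check follows; the values are placeholders, not figures from any actual article.

```python
from math import sqrt

# Recompute Cohen's d from reported descriptives, then check whether the
# implied t statistic is anywhere near the t the paper reports.
# All values below are placeholders, not numbers from any actual article.

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (m1 - m2) / sqrt(pooled_var)

def implied_t(d, n1, n2):
    """Independent-samples t implied by d and the group sizes."""
    return d / sqrt(1 / n1 + 1 / n2)

m1, sd1, n1 = 5.1, 1.2, 60   # hypothetical violent-media group
m2, sd2, n2 = 4.5, 1.1, 60   # hypothetical control group
reported_t  = 6.40           # hypothetical value printed in a results section

d = cohens_d(m1, sd1, n1, m2, sd2, n2)
print(f"d from descriptives: {d:.2f}")
print(f"t implied by d: {implied_t(d, n1, n2):.2f} vs. reported t: {reported_t:.2f}")
# A large gap between the implied and reported t means I cannot trust the
# descriptives or the test statistic enough to extract an effect size.
```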
Under ordinary circumstances, I am not a confrontational person. If anything, I am quite the opposite. However, when I see something that is just plain wrong, I cannot remain silent. There is a moral and ethical imperative to speak out. Right now I see a series of articles that have grave errors, errors that would lead a reasonable skeptic to conclude that the main effect the authors sought (weapons priming, video game priming, violent media priming) never existed. There may or may not be some subset effect going on, but without the ability to reproduce the original findings, there is no way to know for sure. Not being able to trust what I read is extremely uncomfortable. I can live with uncertainty - after all, a certain level of uncertainty is built into our research designs and our data analysis techniques. What I cannot live with is no certainty at all. To borrow from an old John Lennon song (one I probably knew better from an old Generation X cover), "gimme some truth." Is that too much to ask? If so, I will continue to be confrontational.
Thursday, May 9, 2019
A word about undergraduate research projects
I suppose standards about acceptable practices for undergraduate research projects vary from institution to institution and across countries. I do have a few observations of my own, based on an admittedly very, very small sample of institutions in one country.
I have worked at a university where we had the staffing and resources to give students introductory stats training and an introductory methods course - the latter usually taken during the junior year. None of those projects was ever intended for publication, given the small samples involved and given that students were expected to produce a finished research project in one semester. At my current university, my department requires students to go through an intensive four-course sequence of statistical and methodological training. Students are required to learn enough basic stats to get by, learn how to put together a research prospectus, and then gain some more advanced training (including creating and managing small databases) in statistics and in methodology. The whole sequence culminates with students presenting their finished research on campus. That may seem like a lot, but by the time students are done, they have at least the basics required to handle developing a thesis prospectus in grad school. At least they won't do what I did and ask, "what is a prospectus?" That was a wee bit embarrassing.
Each fall semester, I write a general IRB proposal to cover most of these projects for my specific course sections. That IRB proposal is limited to a specific on-campus adult sample and to minimal risk designs. None of those projects covered under that general IRB proposal are intended for publication. Students wanting to go the extra mile need to complete their own IRB forms and gain approval. Students who are genuinely interested in my area of expertise go through the process of completing their own IRB proposals and dealing with any revisions, etc., before we even think of running anything.
Only a handful of those projects have produced anything that my students wished to pursue to publication. To date, I have one successfully published manuscript with an undergrad, one that was rejected (it was an interesting project, but admittedly the sample was small and the findings too inconclusive), and one that is currently under review. That these students were coauthors means that they contributed significantly to the writeup. That means my peers could grill them at presentation time and they could give satisfactory answers. They knew their stuff. And the reason they knew their stuff is that I went out of my way to make sure they were mentored as they made those projects their own. I made sure that I had seen the raw data and worked with each student to make sure the data were analyzed correctly. I stick to fairly simple personality-social projects in those cases, as that is my training. That's just how we roll.
Vive La Fraud?
Hang on to your hats, folks. This is just wild.
If you've never read about Nicholas Guéguen before, read this post in Retraction Watch. Nick Brown and James Heathers go into far more detail in their own blog post.
Guéguen is certainly prolific - a career spanning a couple of decades has yielded some 336 published articles. He has also published some books, although I am not particularly familiar with the publisher involved. It's not that the topics of interest are a bit out of the ordinary. Spend enough time in psychology or any related science and you'll find someone researching something that will make you wonder. However, as long as the methods are legit and the research can withstand independent replication efforts, I would hope most of us would accept that the phenomena under investigation are at least apparently real and potentially of interest to some subset of the human species. Sort of is what it is.
Rather, it is the methodology Guéguen employs that causes concern. In particular, his field research apparently is conducted by beginning research methods students - the vast majority of whom have a minimal grasp of what they are doing - and then published under his own name as if the work were his own (students were apparently never told that this might occur). Worse, at least one student I am aware of, based on Brown and Heathers' work, owned up to the apparent reality that students tended to fabricate the results they turned in for a grade in Guéguen's methods courses over the years. Whether Guéguen was aware of that is certainly a worthy question to ask. At minimum, I agree with Brown and Heathers that this is a body of research that deserves careful scrutiny.
Two of the articles in question were supposed to have been retracted by now, but apparently have not been. My limited experience with editors is that when one is the bearer of bad tidings, the tendency is for them to ignore the matter for as long as possible, drag their heels even longer, and hope the problem (i.e., legitimate complaints about dodgy research) goes away. Some other things to remember: editors and publishers never make mistakes - according to editors and publishers. When the proverbial truth hits the fan, their legal beagles will provide them with whatever cover is needed to avoid accountability. Regardless, expect a situation like this one to drag on for a while. Heck, it took Markey and Elson how many years to get the original Boom! Headshot! article retracted? I am guessing that in the case of some articles I have been following, it will easily be a couple of years before anything even remotely resembling satisfaction occurs. Once things go adversarial, that's just the way it is - and our incentive systems reward being adversarial. The only time a retraction might go relatively quickly (as in a few months) is if you get that one author in a blue moon who hollers at an editor and says "dude, I really made a mess of things - do something." If you find yourself in that situation (try to avoid it, please), do save your email correspondence with any editors, publishers, etc. Document everything. You'll be glad you did.
Tuesday, May 7, 2019
Gardening With the Professor
I have been dealing with some seriously heavy stuff as of late and am in need of diversions. Finally I am finding an excuse to get away from the computer and away from scrutinizing research long enough to do something a bit more physical.
When I put a down payment on my current digs, the previous owners had left a flower bed in the front yard with some bushes, but for the most part they had placed fake plastic bushes with fake flowers around that flower bed. The realtor apparently thought that was a good idea for staging the house - the fake plants added color. Fair enough. But those do degrade over time. So too do some of the real plants. And since I live right along the western edge of North America's Eastern Forest, I get some unwanted plants in that flower bed. To give you some idea, last fall I noticed that two trees had taken root in the flower bed. They were way too close to the house and would have done structural damage eventually. So I cut them down to stumps right before the Winter Solstice with the idea of removing the stumps and replacing them with more proper flowering plants.
About a day ago, I went by a home supply store with a nursery, bought a couple of plants that I liked, and brought them home. I made a mental note of where I wanted them placed. Late the following afternoon, I came home from work and errands. Of course I picked the hottest day of the calendar year, so far, to do some grueling physical work, but that is okay. Those tree stumps still needed to be dealt with, especially since I had not quite succeeded in killing them off late last fall. I cut off the new limbs that were forming, and then went to work digging up the two stumps and their root systems. They were thankfully small enough trees at this point that I could do that without having to call someone and pay for their services. Once the stumps and roots had been sufficiently removed, I went to work digging the holes where I wanted my new plants. I got both of them in place, covered the base of each with sufficient dirt, and cleared up the debris and my tools. We'll just say that by the time I was done, I was definitely ready for a shower and plenty of hydration. I also made sure my new plants had some hydration.
I think they look lovely. They are Veronica "First Love" plants. The flowers are an almost fluorescent violet-pink. They require partial to full sunlight and at least weekly watering. Where I live, both are doable. Hopefully they do well. In the next year or two, I will have to decide what I want to do about the rose bush in the flower bed. It is showing its age. However, it is still producing some lovely red roses. This spring has been one of its best for yielding roses. I have contemplated eventually replacing it and some boxwoods elsewhere on the property with new rose plants. I don't think you can have too many roses. Of course I will have to hire a contractor to remove the older plants. They're too established. The rest of the work should go easily enough. I'll give that a year or two. Want to save a few pennies first.
Last year, I re-established bulb plants in a back yard flower bed - including, of course, tulips. There are a couple of other flower beds that went bare a while back. I am still contemplating what to do with those areas. I am thinking perhaps sunflowers. We've had sunflowers in the past. They are an annual plant, and usually I prefer perennials. But they are quite nice to look at, and although not completely dog-proof, they seem to do okay. We get some honey bees and bumble bees around the property, and I would like to encourage more of our pollinators to do the work they evolved to do.
Hopefully I will have some more progress before long.
Still a bit strange
Another article from the Zhang lab was published very recently. I do have to say that the manuscript is well formatted compared to many of the earlier papers. There are some initial concerns I will voice now. I may come back to this one later if and when the moment arises.
The literature review notes a number of meta-analyses that purport to provide support for media violence causing aggressive outcomes. The authors do offer a quick summary of several other meta-analyses showing that the average effect sizes from media violence research are negligible. Then the authors quickly state that the evidence is "overwhelming" that violent media lead to aggression. Um... not quite. There is actually a serious debate about the extent to which any link exists between exposure to violent content in films, video games, etc. and actual aggressive behavior, and the impression I get is that at best the matter is far from settled. The evidence in favor is arguably underwhelming rather than overwhelming. But hey, let's blow through all that research and at least check off the little box saying it was cited before discarding its message.
I am not too keen on the idea of throwing participants out of a study unless there is a darned good reason. Equipment malfunctions, failures to follow instructions, and suspicion (i.e., guessing the hypothesis) would strike me as good reasons. Merely trying to get an even number of participants in the treatment and control conditions is not in and of itself a good reason. If one is inclined to do so anyway and to state that the participants whose data were examined were randomly chosen, then at least go into some detail as to what that procedure entailed.
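For illustration, "some detail" could be as simple as reporting the sampling procedure, the random seed, and which cases were retained, so the exclusion can be reproduced and audited. A hypothetical sketch follows; the participant counts and seed are made up, not taken from the paper.

```python
import random

# One way "participants were randomly selected to equalize cell sizes" can be
# made verifiable: report the procedure, the seed, and the retained IDs.
# Everything here is hypothetical; the paper reports no such detail.

def random_subsample(participant_ids, keep_n, seed=20190509):
    """Reproducibly draw keep_n participants from a cell using a fixed seed."""
    rng = random.Random(seed)                # fixed seed -> the draw can be re-run
    return sorted(rng.sample(participant_ids, keep_n))

treatment_ids = list(range(1, 131))          # e.g., 130 participants in one cell
kept = random_subsample(treatment_ids, 120)  # trim to match a 120-person cell
print(f"Kept {len(kept)} of {len(treatment_ids)}; first few retained IDs: {kept[:5]}")
```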
I will admit that I do not know China particularly well, but I am a bit taken aback that 15 primary schools could yield over 3,000 kids who are exactly 10 years of age. That is... a lot - an average of more than 200 ten-year-olds per school. Those schools must be huge. Then again, these kids go to school in a mega-city, so perhaps this is within the realm of possibility. This is one of those situations where I am a bit on the skeptical side, but I won't rule it out. Research protocols would certainly clarify matters on that point.
I am not sure why the authors use Cohen's d for effect size estimates in the main effect analyses and then use eta squared for the remaining ANOVA analyses. Personally, I would prefer consistency. It's those inconsistencies that make me want to ask questions. At some point I will dive deeper into the mediation analyses. Demonstrating that the accessibility of aggressive thoughts mediates the link between a particular exemplar of violent media and aggression is the great white whale that aggression researchers have been chasing for a good while now. If true and replicable, this would be some rare potential good news for models of aggression derived from a social cognition perspective.
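For what it's worth, the two metrics are directly convertible for a simple two-group, between-subjects contrast with roughly equal group sizes (eta squared is approximately d²/(d² + 4)), so a reader can at least check whether the d values and eta squared values in a paper imply one another. A quick sketch under those assumptions, with placeholder numbers rather than figures from the article in question:

```python
from math import sqrt

# Approximate conversions between Cohen's d and eta squared for a two-group,
# between-subjects contrast with roughly equal group sizes:
#   eta_sq ~ d**2 / (d**2 + 4)    and    d ~ 2 * sqrt(eta_sq / (1 - eta_sq))
# The inputs below are placeholders, not values from the paper under discussion.

def eta_sq_from_d(d):
    return d**2 / (d**2 + 4)

def d_from_eta_sq(eta_sq):
    return 2 * sqrt(eta_sq / (1 - eta_sq))

reported_d = 0.45       # a main effect reported as Cohen's d (hypothetical)
reported_eta_sq = 0.02  # the same effect reported as eta squared (hypothetical)

print(f"eta^2 implied by d = {reported_d}: {eta_sq_from_d(reported_d):.3f}")
print(f"d implied by eta^2 = {reported_eta_sq}: {d_from_eta_sq(reported_eta_sq):.3f}")
# If the reported d and eta squared do not imply one another (as in this toy
# example), that is one more inconsistency worth asking the authors about.
```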
It is not clear whether there were any manipulation checks included in the experimental protocols, or whether there was any extensive debriefing for suspicion - i.e., hypothesis guessing. In the reaction time experiments that I ran, as in any experiment using the competitive reaction time task as a measure of aggression, it was standard operating procedure to include manipulation checks and an extensive debriefing with each participant, as problems like suspicion could contaminate the findings. Maybe those are procedural practices that have been abandoned altogether? I would hope not.
One of the most difficult tasks in conducting any media violence experiment is ascertaining that the violent and nonviolent media samples in question are as equivalent as possible except, of course, for the level of violent content. It is possible that the cartoon clips the authors use are perfectly satisfactory. Unfortunately, we have to take that on faith for the time being.
At the end of the day, I am left with a gut feeling that I shouldn't quite believe what I am reading, even if it appears relatively okay on the surface. There are enough holes in the report itself that I suspect a well-versed skeptic can have a field day. Heck, as someone who is primarily an educator, I am already finding inconvenient questions and all I have done is give the paper an initial reading. This is my hot take on this particular paper.
Assuming the data check out, what I appear to be reading thus far suggests that these effects are small enough that I would not want to write home about them. In other words, in a best-case scenario, I doubt this paper is going to be the one to change any minds. It will appeal to those needing to believe that violent content in various forms of mass media is harmful, and it will be either shrugged off or challenged by skeptics. I guess this was good enough for peer review. It is what it is. As I have stated elsewhere, peer review is a filter, and not a perfect filter. The rest is left to those of us who want to tackle research post-peer review.