Monday, September 29, 2025

About that cyberbullying study: Something is still not right

A few months ago, I made note of concerns raised on PubPeer about a cyberbullying article. The good news is that the lead author did offer a revised table to make the information more readable. The bad news is that there are still some problems with that table that have not been discussed.

As you will notice, this updated table is considerably more readable, so kudos to the authors for that. However, there are still some lingering concerns. The good news is that the reader now knows the specific breakdown between the number of participants who noticed and who did not notice the bullying. That matches what I had sussed out from reading the report closely, which was comforting. The test statistics, however, are still a bit odd. With a total sample of 221 (N = 221, for those who need that spelled out), I would have expected a different number of degrees of freedom (df) for these t-test statistics. For an independent-samples t-test, total degrees of freedom can be computed on the back of a napkin: N - 2. So if there were 221 total participants, N - 2 = 219, not 217. I do have some questions there. My charitable take is that this revised table was hastily drawn up and a simple typo occurred. It happens to all of us.

It is also not entirely clear what the confidence intervals refer to. If they are associated with a Cohen's d statistic, why not report Cohen's d, as one ordinarily would? The Chi-square statistic is presumably statistically significant, so why not note at the bottom of the table that * denotes p < .05? Finally, the t-tests in the table are reported as positive, but in the corresponding paragraph the same t-values are reported as negative. If the authors have reason to believe this does not matter, they need to explain why it doesn't. I'd certainly be intrigued to read the argument needed to justify such a perspective.
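For the back-of-the-napkin crowd, here is a minimal sketch of that df arithmetic in Python. It assumes these are standard independent-samples t-tests pooled across the two groups, which the table does not state explicitly, so treat that as my assumption.

```python
# Back-of-the-napkin df check, assuming standard independent-samples
# t-tests (my assumption; the table does not say which variant was used).
N = 221                  # total sample size reported in the revised table
df_expected = N - 2      # for two groups: df = n1 + n2 - 2 = N - 2
print(df_expected)       # 219, not the 217 shown in the table

# Working backwards: df = 217 would imply only 219 participants actually
# entered the analysis (e.g., two cases dropped for missing data).
print(217 + 2)           # 219
```

If a couple of cases were excluded somewhere along the way, that would reconcile the numbers, but readers shouldn't have to guess.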

Since there had been concerns expressed earlier about some inconsistencies in the reporting of means and standard deviations in the original published article, and since there appeared to be at least one effort to correct an incorrectly reported mean in the updated table, I decided to do a bit of a deeper dive, using SPRITE to explore whether the reported means and standard deviations are mathematically possible. Please keep in mind this very important disclaimer: the findings I am reporting should not be taken as gospel. I am merely noting a couple of areas of concern that I would strongly advise the authors, the journal editor, and of course any data sleuths to follow up on posthaste. I am also making an assumption I feel uncomfortable making: that each mean and standard deviation is based on a single item, and that the underlying data are integers. The latter I am confident about, as the test items involved are Likert-scale items. The former is merely an assumption, as the research report does not make the number of items for each DV explicit. If I am wrong, then I am prepared to admit it. With that in mind, let's dive in, shall we?
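For readers who want to poke at the numbers themselves, here is a rough Python sketch of the kind of feasibility check SPRITE performs: given a reported mean, a sample size, and integer responses on a bounded scale, what is the largest standard deviation that is even achievable? The scale anchors and the sample size in the example call below are placeholders of my own choosing (the paper does not report them in a form I can quote here), so this is a sketch of the logic, not a reproduction of SPRITE's output.

```python
import math

def max_possible_sd(mean, n, lo=1, hi=5):
    """Largest sample SD achievable by n integer responses on a lo..hi scale
    whose sum matches the reported mean (sum = round(mean * n)). Variance is
    maximized by pushing responses to the scale endpoints, with at most one
    response left at an intermediate value."""
    total = round(mean * n)                # integer responses -> integer sum
    k = (total - n * lo) // (hi - lo)      # responses that can be raised to the top anchor
    values = [hi] * k + [lo] * (n - k)
    if k < n:
        values[k] += total - sum(values)   # park any remainder on one response
    m = sum(values) / n
    var = sum((x - m) ** 2 for x in values) / (n - 1)
    return math.sqrt(var)

# Illustrative call only: the per-cell n and the scale anchors here are
# hypothetical, so the ceiling returned will not necessarily match SPRITE's.
ceiling = max_possible_sd(mean=1.19, n=110, lo=1, hi=5)
print(round(ceiling, 2))
```

If a reported SD exceeds that ceiling, no set of integer responses can produce it, which is essentially the red flag SPRITE raises in the results below.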

What you will see below are SPRITE results for each cell mean and standard deviation as reported in the above table. Let's see what I discovered. 

The above image is for the following cell: Noticed/Chat. The mean is 1.26, and the SD is 0.85. I had SPRITE generate nine possible distributions (the default setting). This one checks out, in the sense that the mean and SD are at least mathematically possible.

This next screenshot is for the cell Noticed/Bully. The reported mean is 1.19, SD = 1.55. There appears to be a problem, however: that SD is not possible according to the SPRITE analysis. The maximum possible SD is 0.72.

The next image is for the cell Not Noticed/Chat. The mean is 1.82, SD = 0.54. SPRITE was only able to generate five possible distributions, but at least on the surface this looks like a plausible mean and SD.

Finally, this is for the cell Not Noticed/Bully. The reported mean is 2.46, SD = 1.58. Again, SPRITE's analysis indicates that the target SD is too high; the maximum possible SD is 1.50.

The standard deviations for two of the four cells are simply not possible as reported in the paper. That is disconcerting for any meta-analyst who might wish to extract effect sizes from the reported means and standard deviations. One could potentially get around that with the reported t-tests, I suppose, if we had any confidence in which reported df were correct, and if we knew the intended sign of each t-test.
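For what it's worth, the conversion a meta-analyst would lean on is straightforward. Here is a minimal sketch, with hypothetical t and group sizes, since the actual noticed/not-noticed split lives in the revised table rather than in this post.

```python
import math

def d_from_t(t, n1, n2):
    """Cohen's d recovered from an independent-samples t statistic:
    d = t * sqrt(1/n1 + 1/n2). The sign of d follows the sign of t,
    which is why the positive-versus-negative discrepancy matters."""
    return t * math.sqrt(1 / n1 + 1 / n2)

# Hypothetical values for illustration; swap in the actual t and group sizes.
print(round(d_from_t(t=2.10, n1=120, n2=101), 2))
```

None of that helps, of course, if the df (and therefore the implied group sizes) cannot be trusted in the first place.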

Keep in mind that the SPRITE analyses are only preliminary. Don't take these as the last word. I am basing the SPRITE analyses on the assumption that each scale used as a DV is a single-item scale, which is reasonable enough absent any other information about the measures used for the DVs. If there are multiple items for either of the DV measures, I will need to re-run the SPRITE analyses accordingly. I am also basing the SPRITE runs on the reported sample sizes for each scale. If those were reported incorrectly, then these analyses are effectively moot. Those who possess more fine-grained skills than I do could likely confirm my work or find concerns that I am missing. We do the best we can within our limits. The purpose of this post is simply to ask questions, in the hope that we get to the truth, or at least the closest approximation thereof.
 

Monday, September 22, 2025

Research Confidential: Burying Findings that are Inconvenient for the Theory

It's been a while since I last posted a Research Confidential. For those who wonder about the inspiration for that occasional post heading, credit the late Anthony Bourdain's book Kitchen Confidential. Much like Bourdain did in that book, I try to expose some of the dark alleys that one might encounter in social psychological research. My prior post reminded me of a situation where some of my own data ended up buried by my then-advisor while I was working on my PhD.

Let's set the scene a bit. The work I was heavily involved in as a PhD student dealt with media violence (broadly defined) and some of the routes to aggressive behavior described in the General Aggression Model (or, as it was then known, the General Affective Aggression Model). I was at the University of Missouri's flagship campus in Columbia, MO (go Tigers!) at the time, and Craig Anderson was my advisor before he headed off to greener pastures at Iowa State University. None of that is a secret, of course. Evidence of my time at Mizzou can still be found on my CV, my Google Scholar profile, etc. Much of the work I was involved in at the time would not get published until the mid-aughts, which wasn't exactly great for my career prospects, but that's a matter for another time.

Not every experiment I conducted worked out. That goes with the territory, but I had a pretty decent track record during my time in Anderson's lab, and later on under Ann Bettencourt's guidance. For now, let's stay focused on my time with Anderson, who was my advisor until he moved to Iowa State. He and I discussed an experiment in which we would examine whether violent content in rock and rap lyrics would prime aggressive cognition (i.e., make aggressive thoughts more accessible) and anger. In other words, we were going to test the cognitive and affective routes to aggression when individuals were exposed to violent lyrical content. We also included a humor manipulation, although that had no bearing on the findings. Our dependent variables were a reaction-time task called the pronunciation task, in which participants would see an aggressive or non-aggressive word and then say that word aloud into a microphone, at which point the computer would record reaction time in milliseconds (faster reaction times for aggressive words relative to neutral words would indicate greater accessibility of aggressive cognition); and a state hostility scale that was essentially a measure of anger. A fellow research assistant, Janie Eubanks, ran a similar experiment using a different measure of aggressive cognition.

The experiment was a success. The main effect of aggressive lyrical content was statistically significant for both dependent variables. So far, so good. However, there was a significant interaction with participants' self-reported sex. In the aggressive lyric condition, women showed quicker reaction times to aggressive words relative to neutral words than did men. A similar pattern was found for anger. Now, extant aggression theories postulated that men were more prone to higher accessibility of aggressive cognition and more prone to respond angrily than women. I was finding the opposite pattern in the data. So when Anderson and I looked at the findings, he was not especially thrilled. I had taken the liberty, as instructed, to compose a write-up of the findings, expecting that this would be a paper to be submitted for publication. It was organized in such a way that we divided the results for the cognitive route and the affective route into separate "experiments" (a form of salami slicing*, which is problematic in its own right, and which I have refrained from since I got out of grad school). The real problem for Anderson was that the pattern of findings did not conveniently fit the theoretical model, and so he didn't want us to go further with reporting. Since I am a bit of a pest, he eventually threw me a bone and allowed me to present the findings at a conference (probably the Midwestern Psychological Association, or something along those lines). And that was it. Janie Eubanks' parallel experiment was more in line with theory, and it got published a few years later (for the record, I am proud of Janie's accomplishment).

I had a fairly plausible explanation for the findings: The songs in both the aggressive and non-aggressive conditions were all performed by men, and in the case of the aggressive songs, let's just say the clips we played were probably a bit edgy. Perhaps female participants simply found the aggressive songs offensive, and that could have led to them showing more anger and greater accessibility of aggressive thoughts than the male participants in the same treatment condition. It's hard to know for sure, of course, as our debriefing protocols did not probe for the extent to which individuals might have been offended by the lyrical content, and that in its own right would have been worth following up with another experiment. But that was not to be. The conclusion I was drawing was hardly profound, but it did point to a difficulty in conducting media violence research that deserved more attention - if for no other reason than to encourage colleagues to do more pilot testing before running experiments comparable to the ill-fated lyric experiment I conducted. That could have led to a productive conversation, if my advisor had been interested, and who knows - perhaps the quality of media violence research would have been better as we approached the dawn of the 21st century. We'll never know. Not every experiment needs to be a theoretical breakthrough, and there's no law saying that all the data analyses we report have to conveniently align with a theory. This is hardly a profound insight, but human behavior is messy, and the endeavor to study human behavior objectively, as humans, is inevitably messy. Why not embrace that, I wonder now, as I did over a quarter century ago.

*Note: salami slicing is a practice of chopping up a single study's worth of data and analyses into multiple studies, either to give the illusion of a multi-study paper (editors at the time loved those) or multiple individual papers to inflate publication numbers. As a practice, salami slicing is not even considered an ethical gray area. It is unethical, and I advise early career researchers to refrain from such practices. 

Burying inconvenient findings (another example of Maier's Law)

One thing you will probably notice as you look through this blog is that I am no fan of burying inconvenient findings. It doesn't matter if it is state or federal governments doing it, or fellow researchers. The bottom line remains the same: the intended audience gets a distorted and one-sided view of the phenomenon under consideration, losing out on important, often crucial information in the process. The US federal government recently removed a thorough narrative review of research on terrorism. You can see the archived report for yourself here, courtesy of the Wayback Machine. I also made a pdf file of the archived document and will at some point upload it to my personal website, just in case the Wayback Machine ever goes away. I figure my tax dollars were used to generate the report, along with the data analyzed in the studies reviewed in it. Why was it pulled? Its findings did not go along with the current government narrative that all terrorism and politically motivated violence comes from left-wing groups. It turns out that the most common culprits when it comes to terrorism are tied to right-wing militias, followed by Islamist groups. Left-wing terrorism is nearly nonexistent in the US. To paraphrase N.R.F. Maier, "if the facts don't conform to the theory, ignore them." The consequences to me are obvious: by ignoring the facts, the federal government will be responsible for law enforcement failing to detect an imminent terrorist attack, or going down blind alleys looking for non-existent left-wing terrorists. Those consequences could be devastating for lives and livelihoods.

Now, remember: I was once an early career researcher. I know what it is like when those with more power over me decide to bury the findings of experiments I had run because those findings did not fit an advisor's pet theory. I found that upsetting back in the late 1990s (even if the stakes in that particular line of research are fairly low), and I still do today. In the sciences we are supposed to be truth-seekers and truth-tellers. To do so successfully means reporting findings that don't mesh with our preferred worldview. I don't have high expectations when it comes to truthfulness from elected officials (politicians tend to be useless in that regard, to varying degrees), but I do hold high expectations for the federal agencies that are supposed to be staffed by career professionals who can report their findings independently of whatever party line dominates at any given time. When those professionals are prevented from reporting their work, we should all be worried.

Update: I failed to mention that the report the government buried in this instance is one of some professional interest to me, insofar as there is evidence that individuals with authoritarian attitudes (as is the case with our own homegrown terrorists) tend to show greater acceptance of a number of authority-sanctioned acts of aggression and violence. I am basing that assessment on much of Bob Altemeyer's work from when he was still an active researcher, and on some of the work I published in the aughts and last decade.


Saturday, September 20, 2025

Welcome, new readers

I logged off for a couple of months, came back to post a couple of new entries, and realized that suddenly the readership numbers are well above anything I've experienced. It took this blog about 13 years to notch its first 250,000 unique visitors, and it may take less than a few months to add 250,000 more. That's an unexpected and hopefully pleasant surprise.

I don't know how far word of my working on a book has spread, but if it has, the secret's out. It's still very much in the beginning stages. I still have a little more groundwork to complete before I really get rolling, of course. And naturally, I am going to need a bit of release time and a sabbatical to really get the book to completion. Maybe it'll be a pretty good story. We'll see. If nothing else, I think it might be a bit of a cautionary tale about one facet of the social psychology literature. And cautionary tales tend to be the best ones to learn from, at least in my experience. I will share more about that project in due time.

I suppose I will feel more obligated to add content here. I have posted irregularly for the entire existence of this blog, and I've always been surprised that it had any readership. I suppose I only post when I believe I have something to say, and I do go through long stretches where my muse is nowhere to be found. I doubt that will change too much. In that sense, I am very much set in my ways in this final phase of my career. The one thing that has changed is that I went from accepting the orthodoxy in my area of specialization, to questioning that orthodoxy, to ultimately striking out on a different path altogether (that's the phase you have stumbled upon).

Anyway, I've always been a bit reclusive and not much one for the spotlight, so bear with me as I try to get used to a bit more attention than I have had before. Perhaps what I have to say going forward will keep you sticking around. Perhaps not. We shall see. There may still be some value left to being one of a small handful of psychologists who have studied the weapons effect (something Berkowitz and LePage pioneered around the time I was born), and arguably one of the last who still actively contributes to that specific research area. I've certainly had a few things to say on the matter in the past, and have more to share in the future. It was, after all, the phenomenon that shaped the most significant moments of my career, and it has consumed my time, energy, and almost my spirit. Pull up a chair. I'll be around.

Friday, September 19, 2025

Postscript to the preceding

 Per my last post: not only were there plenty of unsubstantiated assertions throughout this latest MAHA report, but this especially caught my eye:

 Asked about rising gun deaths in children, Kennedy called it “a complex question” and claimed — without evidence — that psychiatric drugs and video games could be among the reasons for gun violence.

That bit stuck out like a sore thumb. The video game and aggression assertion is one with which I have long been familiar. I may be a minor player in the media violence research space, but I do keep up with the literature (and I do still occasionally publish new work on the weapons effect, or weapons priming effect, which is at least adjacent to research on video games and other media). There is no evidence that violent video games have any connection to gun violence. Markey and Ferguson spend some time debunking that claim in their book Moral Combat, which, although published several years ago, is still current enough for me to cite. It's unclear whether there is even much of a causal link between playing violent video games and the sorts of mild aggression we can ethically measure in your typical psychology lab.

Unfortunately, this is apparently the best the US federal government can do at this point in time. That does not bode well for policy decisions, nor does it bode well for the trustworthiness of its assertions regarding any social science claim. At this point, my recommendation is that we can safely dismiss claims about video games, for example, made by the US federal government, as they will likely be made up without any supporting empirical evidence, and given this government's lack of transparency, I would not even trust any alleged empirical evidence it might attempt to concoct. This is the sad state of science in the US. It did not have to be this way.

As an aside, I will note that much of the media violence research space is, and has been, needlessly politicized for a very long time. Those responsible for politicizing that research know who they are. What has been accomplished, as near as I can tell, is that policymakers will simply cherry-pick the studies that fit their preconceived conclusions. We have a responsibility, now more than ever, as scientists to follow the data where they lead us and to accept the findings we obtain, whether or not they fit our particular pet theories. I'm just some obscure social psychologist. I don't have the platform, nor the sort of editorial power, to make sure that media violence research is done competently and honestly, but I hope those who do have that sort of influence use it wisely. If nothing else, we can be there as truth-seekers and truth-tellers to point to the facts when our government has given up on the truth. But hey, on the bright side, at least the latest MAHA report didn't have fake citations and might have been cobbled together by actual persons instead of relying entirely on AI (yes, I know - gallows humor).

PS: Here is a screenshot that summarizes the state of research on media violence from FORRT's website:

 


Wednesday, September 17, 2025

Yet another MAHA report and yet another fail

Let's call this a follow-up post to the AI-garbled mess of a report RFK Jr. put out this past spring. That one included fake citations. The good news is that RFK Jr. has learned his lesson: the most recent report, with its plethora of recommendations supposedly based in sound science, includes no citations at all. I guess that solves the problem, eh? This is what happens when political appointees who have no idea what they are doing try to generate papers to argue their positions. Had the AI-generated report from May been submitted to a journal, it would eventually have been flagged as fraudulent and retracted, assuming it survived the peer-review process (there's always a chance it would have). This new report would not even receive a passing grade in a freshman-level course. It turns out that professors and instructors want to see claims and recommendations backed up with evidence, including citations. Editors want that too - and whatever RFK Jr. and his band of idiots churned out this time would in all likelihood have received a desk rejection. Anyway, this is another example of our tax dollars at work. Sigh.