I got involved in a bit of data sleuthing a few months ago. This one was truly a team effort. I hope eventually to talk about what we found and what resulted once we contacted journal editors. For now, I can safely say that our efforts are bearing some fruit. Seeing a corrigendum to one of the problematic articles (one that was technically still in press, although published online) made my weekend. It is a start. Given that there is a bit of a pattern to the lab involved, and that we are not talking about isolated mistakes, I really hope the remaining editors act responsibly and in a timely manner. The problems with one article are relatively minor. With others, the problems are serious: data analyses that don't add up as reported, poorly constructed tables, miscalculations of the number of trials in cognitive experiments, degrees of freedom that don't match the reported sample sizes, and potential self-plagiarism (a problem I know all too well).
For a long time I have told my undergrad methods students that peer review is a first line of defense, but that it is far from perfect. Increasingly I am advocating for post-publication peer review, both where I have a public presence and in the classroom. I don't view this as an adversarial process, and in fact I hold nothing personally against any of the individuals who authored the articles in question. My concern, and I think anyone's concern, should be that we do our best to get it right. If independent individuals spot serious problems, we have an obligation to take those concerns seriously and to work with them and with our respective editors to correct whatever errors were made, for the sake of the psychological sciences.
Update: I just noticed a second corrigendum. There are easily a half dozen more to go. We'll see what happens, but it looks like the journal editors are at least taking our concerns seriously.