That is a question I ask myself quite a bit. Actually, the answer is fairly mundane. As the saying goes: "I've seen stuff. I don't recommend it."
Part of the lived reality of working on a meta-analysis (or the three I have worked on) is that you end up going down many of your research area's dark alleys. You see things that frankly cannot be unseen. What is even more jarring is how recently some of those dark alleys were constructed. I've seen it all: overgeneralization from small samples, weak operational definitions of the variables under consideration, poorly validated measures, the mere fact that many studies are inadequately powered, and so on.
If you ever wonder why phenomena do not replicate, just wander down a few of our own dark alleys and you will understand rather quickly. The meta-analysis on the weapons effect, for which I was the lead author, was a huge turning point for me. Between the allegiance effects, the underpowered research, and some questions about whether any of the measures employed were actually validated, I ended up with questions that had no satisfactory answer. I have been able to show that the decline effect that others had found when examining Type A Behavior Pattern and health outcomes also applied to aggressive behavioral outcomes. I was not surprised - only disappointed in the quality of the research conducted. That much of this research was conducted at a point in time when there were already serious questions about the validity of what we refer to as Type A personality is itself rather disappointing. And yet, that work persisted for a while, often with small samples. In my last two meta-analyses, I have also documented duplicate publications. Yes, the same data sets managed to appear in at least two different journals. I have questions. Regrettably, those who could answer them are long since retired, if not deceased. Conduct a meta-analysis, and expect to find ethical breaches, ranging from potential questionable research practices to outright fraud.
That's a long way of saying that I get the need for doing whatever can be done to make what we do in the psychological sciences better: validated instruments, registered protocols and analysis plans, proper power analyses, and so on. There are many who are counting on us getting it as close to right as is humanly possible. Those include not only students, but the citizens who fund our work. There is no point in "giving away the science of psychology in the public interest" (as George Miller would have put it) if we are not doing due diligence at the planning phase of our work.
Asking early-career research professionals to shoulder the burden is unfair. Those of us in more privileged positions need to step up. We need to be willing to speak truth to power; otherwise there is no point in continuing, as all we have is a pretense with little substance. I wish I could say doing so would make one more marketable. The reality is far starker. At minimum, we need to go to work knowing we have a clean conscience. Doing so will maintain public trust in our work. Failure is not something I even want to contemplate.
So I am a reformer. However long I am around in an academic environment, that is my primary role. Wherever I can support those who do the heavy lifting, I must do so. I have undergraduate students and members of the public in my community counting on it. In reality, we all do.